2025-11-13 16:14:40,102 [ 60925 ] INFO : ClickHouse root is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse (runner:53, check_args_and_update_paths)
2025-11-13 16:14:40,102 [ 60925 ] INFO : Cases dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:79, check_args_and_update_paths)
2025-11-13 16:14:40,102 [ 60925 ] INFO : utils dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/utils (runner:90, check_args_and_update_paths)
2025-11-13 16:14:40,102 [ 60925 ] INFO : base_configs_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/programs/server, binary: /home/ubuntu/_work/_temp/test/build/clickhouse, cases_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:92, check_args_and_update_paths)
clickhouse_integration_tests_volume
Running pytest container as: 'docker run --rm --name clickhouse_integration_tests_crx4y6 --privileged --dns-search='.' --memory=30709026816 --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=5dc43a6382f0 -e DOCKER_BASE_TAG=5ccda723c1fc -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=d862517635bf -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS="--dist=loadfile -n 10 -rfEps --run-id=0 --color=no --durations=0 --report-log=parallel0_0.jsonl --report-log-exclude-logs-on-passed-tests test_accept_invalid_certificate/test.py::test_accept test_accept_invalid_certificate/test.py::test_connection_accept test_accept_invalid_certificate/test.py::test_default test_accept_invalid_certificate/test.py::test_strict_connection_reject test_accept_invalid_certificate/test.py::test_strict_reject test_accept_invalid_certificate/test.py::test_strict_reject_with_config test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field test_attach_partition_using_copy/test.py::test_all_replicated test_attach_partition_using_copy/test.py::test_both_mergetree test_attach_partition_using_copy/test.py::test_not_work_on_different_disk test_attach_partition_using_copy/test.py::test_only_destination_replicated test_backup_restore_azure_blob_storage/test.py::test_backup_restore test_backup_restore_azure_blob_storage/test.py::test_backup_restore_correct_block_ids test_backup_restore_azure_blob_storage/test.py::test_backup_restore_diff_container test_backup_restore_azure_blob_storage/test.py::test_backup_restore_on_merge_tree test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf1 test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf2
'test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-default]' 'test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-s3_native_copy]' 'test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-s3_no_native_copy]' 'test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-default]' 'test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-s3_native_copy]' 'test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-s3_no_native_copy]' test_backward_compatibility/test_ip_types_binary_compatibility.py::test_ip_types_binary_compatibility test_backward_compatibility/test_memory_bound_aggregation.py::test_backward_compatability test_backward_compatibility/test_short_strings_aggregation.py::test_backward_compatability test_buffer_profile/test.py::test_buffer_profile test_buffer_profile/test.py::test_default_profile test_check_table_name_length/test.py::test_backward_compatibility test_check_table_name_length/test.py::test_check_table_name_length test_cleanup_after_start/test.py::test_old_dirs_cleanup test_cluster_discovery/test_auxiliary_keeper.py::test_cluster_discovery_with_auxiliary_keeper_startup_and_stop test_cluster_discovery/test_password.py::test_connect_with_password test_compatibility_merge_tree_settings/test.py::test_check_projections_compatibility test_compatibility_merge_tree_settings/test.py::test_config_overrides_compatibility test_compression_nested_columns/test.py::test_nested_compression_codec test_concurrent_queries_for_user_restriction/test.py::test_exception_message test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_in_empty_pool test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_in_empty_pool_replicated test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_two_replicas test_concurrent_ttl_merges/test.py::test_no_ttl_merges_in_busy_pool test_config_reloader_interval/test.py::test_reload_config test_config_xml_main/test.py::test_xml_main_conf test_config_yaml_merge_keys/test.py::test_yaml_merge_keys_conf 'test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk]' 'test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume]' test_custom_settings/test.py::test_custom_settings test_custom_settings/test.py::test_illformed_setting test_ddl_alter_query/test.py::test_alter test_ddl_alter_query/test.py::test_ddl_queue_hostname_change test_ddl_worker_replicas/test.py::test_ddl_worker_replicas test_default_role/test.py::test_alter_user test_default_role/test.py::test_set_default_roles test_default_role/test.py::test_wrong_set_default_role 'test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_complex[complex_key_hashed]' 'test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_ranged[range_hashed]' 'test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_simple[hashed]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_cache-False-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_cache-True-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_direct-False-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_direct-True-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_hashed-False-False]' 
'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_hashed-True-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_ranged[range_hashed-False-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_ranged[range_hashed-True-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[cache-False-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[cache-True-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[direct-False-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[direct-True-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[flat-False-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[flat-True-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[hashed-False-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[hashed-True-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[cache-False-True]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[cache-True-True]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[direct-False-True]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[direct-True-True]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[flat-False-True]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[flat-True-True]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[hashed-False-True]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[hashed-True-True]' test_distributed_ddl_password/test.py::test_alter test_distributed_ddl_password/test.py::test_truncate test_distributed_directory_monitor_split_batch_on_failure/test.py::test_distributed_background_insert_split_batch_on_failure_OFF test_distributed_directory_monitor_split_batch_on_failure/test.py::test_distributed_background_insert_split_batch_on_failure_ON test_executable_user_defined_function/test.py::test_executable_function_always_error_python test_executable_user_defined_function/test.py::test_executable_function_argument_python test_executable_user_defined_function/test.py::test_executable_function_bash test_executable_user_defined_function/test.py::test_executable_function_input_nullable_python test_executable_user_defined_function/test.py::test_executable_function_non_direct_bash test_executable_user_defined_function/test.py::test_executable_function_parameter_python test_executable_user_defined_function/test.py::test_executable_function_python test_executable_user_defined_function/test.py::test_executable_function_query_cache test_executable_user_defined_function/test.py::test_executable_function_send_chunk_header_python test_executable_user_defined_function/test.py::test_executable_function_signalled_python test_executable_user_defined_function/test.py::test_executable_function_slow_python test_executable_user_defined_function/test.py::test_executable_function_sum_json_python test_executable_user_defined_function/test.py::test_executable_function_sum_python test_file_cluster/test.py::test_count test_file_cluster/test.py::test_format_detection -vvv " altinityinfra/integration-tests-runner:226bfaf75ac1 '. 
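For post-processing, the PYTEST_ADDOPTS above make the pytest-reportlog plugin (reportlog-0.4.0 in the plugin list below) write a machine-readable run log to parallel0_0.jsonl. A minimal sketch for summarizing that file, not taken from this run, assuming the standard reportlog layout of one JSON object per line where "TestReport" entries carry "nodeid", "when" and "outcome":

import json
import sys
from collections import Counter

def summarize(path="parallel0_0.jsonl"):
    """Count per-test outcomes and list failures from a pytest-reportlog JSONL file."""
    outcomes = Counter()
    failed = []
    with open(path) as report:
        for line in report:
            entry = json.loads(line)
            if entry.get("$report_type") != "TestReport":
                continue  # skip SessionStart/SessionFinish/CollectReport entries
            if entry["outcome"] == "failed":
                failed.append(f"{entry['nodeid']} ({entry['when']})")
            if entry["when"] == "call":
                outcomes[entry["outcome"]] += 1
    print(dict(outcomes))
    for item in failed:
        print("FAILED", item)

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "parallel0_0.jsonl")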
Start tests ============================= test session starts ============================== platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python3 cachedir: .pytest_cache Test order randomisation NOT enabled. Enable with --random-order or --random-order-bucket= rootdir: /ClickHouse/tests/integration configfile: pytest.ini plugins: timeout-2.3.1, repeat-0.9.3, order-1.0.0, reportlog-0.4.0, xdist-3.5.0, random-order-1.1.1 timeout: 900.0s timeout method: signal timeout func_only: False created: 10/10 workers 10 workers [100 items] scheduling tests via LoadFileScheduling test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_cache-False-False] test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-default] test_executable_user_defined_function/test.py::test_executable_function_always_error_python test_backup_restore_azure_blob_storage/test.py::test_backup_restore test_default_role/test.py::test_alter_user test_accept_invalid_certificate/test.py::test_accept test_attach_partition_using_copy/test.py::test_all_replicated test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_complex[complex_key_hashed] test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_in_empty_pool test_buffer_profile/test.py::test_buffer_profile [gw3] [ 1%] PASSED test_accept_invalid_certificate/test.py::test_accept test_accept_invalid_certificate/test.py::test_connection_accept [gw3] [ 2%] PASSED test_accept_invalid_certificate/test.py::test_connection_accept test_accept_invalid_certificate/test.py::test_default [gw3] [ 3%] PASSED test_accept_invalid_certificate/test.py::test_default test_accept_invalid_certificate/test.py::test_strict_connection_reject [gw3] [ 4%] PASSED test_accept_invalid_certificate/test.py::test_strict_connection_reject test_accept_invalid_certificate/test.py::test_strict_reject [gw3] [ 5%] PASSED test_accept_invalid_certificate/test.py::test_strict_reject test_accept_invalid_certificate/test.py::test_strict_reject_with_config [gw3] [ 6%] PASSED test_accept_invalid_certificate/test.py::test_strict_reject_with_config [gw9] [ 7%] PASSED test_buffer_profile/test.py::test_buffer_profile test_buffer_profile/test.py::test_default_profile [gw9] [ 8%] PASSED test_buffer_profile/test.py::test_default_profile [gw6] [ 9%] PASSED test_default_role/test.py::test_alter_user test_default_role/test.py::test_set_default_roles [gw6] [ 10%] PASSED test_default_role/test.py::test_set_default_roles test_default_role/test.py::test_wrong_set_default_role [gw6] [ 11%] PASSED test_default_role/test.py::test_wrong_set_default_role [gw4] [ 12%] PASSED test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-default] test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-s3_native_copy] test_check_table_name_length/test.py::test_backward_compatibility [gw4] [ 13%] PASSED test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-s3_native_copy] test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-s3_no_native_copy] test_compatibility_merge_tree_settings/test.py::test_check_projections_compatibility [gw4] [ 14%] PASSED test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-s3_no_native_copy] test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-default] test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk] [gw1] [ 15%] PASSED 
test_executable_user_defined_function/test.py::test_executable_function_always_error_python test_executable_user_defined_function/test.py::test_executable_function_argument_python [gw4] [ 16%] PASSED test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-default] test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-s3_native_copy] [gw1] [ 17%] PASSED test_executable_user_defined_function/test.py::test_executable_function_argument_python test_executable_user_defined_function/test.py::test_executable_function_bash [gw4] [ 18%] PASSED test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-s3_native_copy] test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-s3_no_native_copy] [gw1] [ 19%] PASSED test_executable_user_defined_function/test.py::test_executable_function_bash test_executable_user_defined_function/test.py::test_executable_function_input_nullable_python [gw4] [ 20%] PASSED test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-s3_no_native_copy] [gw1] [ 21%] PASSED test_executable_user_defined_function/test.py::test_executable_function_input_nullable_python test_executable_user_defined_function/test.py::test_executable_function_non_direct_bash [gw1] [ 22%] PASSED test_executable_user_defined_function/test.py::test_executable_function_non_direct_bash test_executable_user_defined_function/test.py::test_executable_function_parameter_python [gw1] [ 23%] PASSED test_executable_user_defined_function/test.py::test_executable_function_parameter_python test_executable_user_defined_function/test.py::test_executable_function_python [gw1] [ 24%] PASSED test_executable_user_defined_function/test.py::test_executable_function_python test_executable_user_defined_function/test.py::test_executable_function_query_cache [gw1] [ 25%] PASSED test_executable_user_defined_function/test.py::test_executable_function_query_cache test_executable_user_defined_function/test.py::test_executable_function_send_chunk_header_python [gw1] [ 26%] PASSED test_executable_user_defined_function/test.py::test_executable_function_send_chunk_header_python test_executable_user_defined_function/test.py::test_executable_function_signalled_python [gw1] [ 27%] PASSED test_executable_user_defined_function/test.py::test_executable_function_signalled_python test_executable_user_defined_function/test.py::test_executable_function_slow_python [gw8] [ 28%] PASSED test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_complex[complex_key_hashed] test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_ranged[range_hashed] [gw7] [ 29%] PASSED test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_in_empty_pool test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_in_empty_pool_replicated [gw8] [ 30%] PASSED test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_ranged[range_hashed] test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_simple[hashed] [gw3] [ 31%] PASSED test_compatibility_merge_tree_settings/test.py::test_check_projections_compatibility test_compatibility_merge_tree_settings/test.py::test_config_overrides_compatibility [gw3] [ 32%] PASSED test_compatibility_merge_tree_settings/test.py::test_config_overrides_compatibility [gw9] [ 33%] PASSED test_check_table_name_length/test.py::test_backward_compatibility test_check_table_name_length/test.py::test_check_table_name_length [gw9] [ 34%] PASSED 
test_check_table_name_length/test.py::test_check_table_name_length test_ddl_alter_query/test.py::test_alter test_distributed_directory_monitor_split_batch_on_failure/test.py::test_distributed_background_insert_split_batch_on_failure_OFF test_custom_settings/test.py::test_custom_settings [gw1] [ 35%] PASSED test_executable_user_defined_function/test.py::test_executable_function_slow_python test_executable_user_defined_function/test.py::test_executable_function_sum_json_python [gw0] [ 36%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_cache-False-False] test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_direct-False-False] [gw1] [ 37%] PASSED test_executable_user_defined_function/test.py::test_executable_function_sum_json_python test_executable_user_defined_function/test.py::test_executable_function_sum_python [gw1] [ 38%] PASSED test_executable_user_defined_function/test.py::test_executable_function_sum_python [gw8] [ 39%] PASSED test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_simple[hashed] test_backward_compatibility/test_short_strings_aggregation.py::test_backward_compatability test_file_cluster/test.py::test_count [gw9] [ 40%] PASSED test_custom_settings/test.py::test_custom_settings test_custom_settings/test.py::test_illformed_setting [gw9] [ 41%] PASSED test_custom_settings/test.py::test_illformed_setting test_backward_compatibility/test_memory_bound_aggregation.py::test_backward_compatability [gw0] [ 42%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_direct-False-False] test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_hashed-False-False] [gw8] [ 43%] PASSED test_file_cluster/test.py::test_count test_file_cluster/test.py::test_format_detection [gw3] [ 44%] PASSED test_ddl_alter_query/test.py::test_alter test_ddl_alter_query/test.py::test_ddl_queue_hostname_change [gw8] [ 45%] PASSED test_file_cluster/test.py::test_format_detection [gw1] [ 46%] PASSED test_backward_compatibility/test_short_strings_aggregation.py::test_backward_compatability [gw3] [ 47%] PASSED test_ddl_alter_query/test.py::test_ddl_queue_hostname_change test_cleanup_after_start/test.py::test_old_dirs_cleanup test_cluster_discovery/test_password.py::test_connect_with_password [gw9] [ 48%] PASSED test_backward_compatibility/test_memory_bound_aggregation.py::test_backward_compatability test_compression_nested_columns/test.py::test_nested_compression_codec [gw0] [ 49%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_hashed-False-False] test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_ranged[range_hashed-False-False] [gw4] [ 50%] PASSED test_distributed_directory_monitor_split_batch_on_failure/test.py::test_distributed_background_insert_split_batch_on_failure_OFF test_distributed_directory_monitor_split_batch_on_failure/test.py::test_distributed_background_insert_split_batch_on_failure_ON [gw1] [ 51%] PASSED test_cleanup_after_start/test.py::test_old_dirs_cleanup [gw0] [ 52%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_ranged[range_hashed-False-False] test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[cache-False-False] test_cluster_discovery/test_auxiliary_keeper.py::test_cluster_discovery_with_auxiliary_keeper_startup_and_stop test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field [gw8] 
[ 53%] PASSED test_cluster_discovery/test_password.py::test_connect_with_password test_config_reloader_interval/test.py::test_reload_config [gw9] [ 54%] PASSED test_compression_nested_columns/test.py::test_nested_compression_codec test_concurrent_queries_for_user_restriction/test.py::test_exception_message [gw8] [ 55%] PASSED test_config_reloader_interval/test.py::test_reload_config [gw0] [ 56%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[cache-False-False] test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[direct-False-False] [gw7] [ 57%] PASSED test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_in_empty_pool_replicated test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_two_replicas test_ddl_worker_replicas/test.py::test_ddl_worker_replicas [gw3] [ 58%] PASSED test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field [gw4] [ 59%] PASSED test_distributed_directory_monitor_split_batch_on_failure/test.py::test_distributed_background_insert_split_batch_on_failure_ON test_config_xml_main/test.py::test_xml_main_conf [gw9] [ 60%] PASSED test_concurrent_queries_for_user_restriction/test.py::test_exception_message test_backward_compatibility/test_ip_types_binary_compatibility.py::test_ip_types_binary_compatibility [gw0] [ 61%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[direct-False-False] test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[flat-False-False] [gw4] [ 62%] PASSED test_backward_compatibility/test_ip_types_binary_compatibility.py::test_ip_types_binary_compatibility [gw3] [ 63%] PASSED test_config_xml_main/test.py::test_xml_main_conf [gw0] [ 64%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[flat-False-False] test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[hashed-False-False] [gw0] [ 65%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[hashed-False-False] test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[cache-False-True] [gw1] [ 66%] PASSED test_cluster_discovery/test_auxiliary_keeper.py::test_cluster_discovery_with_auxiliary_keeper_startup_and_stop test_config_yaml_merge_keys/test.py::test_yaml_merge_keys_conf [gw1] [ 67%] PASSED test_config_yaml_merge_keys/test.py::test_yaml_merge_keys_conf [gw7] [ 68%] PASSED test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_two_replicas test_concurrent_ttl_merges/test.py::test_no_ttl_merges_in_busy_pool [gw5] [ 69%] FAILED test_attach_partition_using_copy/test.py::test_all_replicated test_attach_partition_using_copy/test.py::test_both_mergetree [gw8] [ 70%] PASSED test_ddl_worker_replicas/test.py::test_ddl_worker_replicas [gw2] [ 71%] PASSED test_backup_restore_azure_blob_storage/test.py::test_backup_restore test_backup_restore_azure_blob_storage/test.py::test_backup_restore_correct_block_ids [gw7] [ 72%] PASSED test_concurrent_ttl_merges/test.py::test_no_ttl_merges_in_busy_pool [gw0] [ 73%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[cache-False-True] test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[direct-False-True] [gw2] [ 74%] PASSED test_backup_restore_azure_blob_storage/test.py::test_backup_restore_correct_block_ids test_backup_restore_azure_blob_storage/test.py::test_backup_restore_diff_container [gw2] [ 75%] PASSED 
test_backup_restore_azure_blob_storage/test.py::test_backup_restore_diff_container test_backup_restore_azure_blob_storage/test.py::test_backup_restore_on_merge_tree [gw2] [ 76%] PASSED test_backup_restore_azure_blob_storage/test.py::test_backup_restore_on_merge_tree test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf1 [gw2] [ 77%] PASSED test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf1 test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf2 [gw6] [ 78%] FAILED test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk] test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume] [gw2] [ 79%] PASSED test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf2 [gw0] [ 80%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[direct-False-True] test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[flat-False-True] [gw0] [ 81%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[flat-False-True] test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[hashed-False-True] [gw0] [ 82%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[hashed-False-True] test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_cache-True-False] [gw0] [ 83%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_cache-True-False] test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_direct-True-False] [gw0] [ 84%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_direct-True-False] test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_hashed-True-False] [gw0] [ 85%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_hashed-True-False] test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_ranged[range_hashed-True-False] [gw0] [ 86%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_ranged[range_hashed-True-False] test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[cache-True-False] [gw0] [ 87%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[cache-True-False] test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[direct-True-False] [gw5] [ 88%] FAILED test_attach_partition_using_copy/test.py::test_both_mergetree test_attach_partition_using_copy/test.py::test_not_work_on_different_disk [gw0] [ 89%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[direct-True-False] test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[flat-True-False] [gw6] [ 90%] FAILED test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume] test_distributed_ddl_password/test.py::test_alter [gw0] [ 91%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[flat-True-False] test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[hashed-True-False] [gw6] [ 92%] PASSED test_distributed_ddl_password/test.py::test_alter test_distributed_ddl_password/test.py::test_truncate [gw6] [ 93%] PASSED test_distributed_ddl_password/test.py::test_truncate [gw0] [ 94%] PASSED 
test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[hashed-True-False]
test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[cache-True-True]
[gw0] [ 95%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[cache-True-True]
test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[direct-True-True]
[gw0] [ 96%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[direct-True-True]
test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[flat-True-True]
[gw0] [ 97%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[flat-True-True]
test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[hashed-True-True]
[gw0] [ 98%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[hashed-True-True]
[gw5] [ 99%] FAILED test_attach_partition_using_copy/test.py::test_not_work_on_different_disk
test_attach_partition_using_copy/test.py::test_only_destination_replicated
[gw5] [100%] FAILED test_attach_partition_using_copy/test.py::test_only_destination_replicated

=================================== FAILURES ===================================
_____________________________ test_all_replicated ______________________________
[gw5] linux -- Python 3.10.12 /usr/bin/python3

start_cluster =

    def test_all_replicated(start_cluster):
        cleanup([replica1, replica2])
>       create_source_table(replica1, "source", True)

test_attach_partition_using_copy/test.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
test_attach_partition_using_copy/test.py:40: in create_source_table
    node.query_with_retry(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
sql = "\n ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'\n (\n price UInt32,\n ...disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/')\n "
stdin = None, timeout = 60, settings = None, user = None, password = None
database = None, host = None, ignore_error = False, retry_count = 3
sleep_time = 0.5
check_callback = at 0x7f1c5a621630>
parse = False

    def query_with_retry(
        self,
        sql,
        stdin=None,
        timeout=None,
        settings=None,
        user=None,
        password=None,
        database=None,
        host=None,
        ignore_error=False,
        retry_count=20,
        sleep_time=0.5,
        check_callback=lambda x: True,
        parse=False,
    ):
        # logging.debug(f"Executing query {sql} on {self.name}")
        result = None
        exception_msg = ""

        for i in range(retry_count):
            try:
                result = self.query(
                    sql,
                    stdin=stdin,
                    timeout=timeout,
                    settings=settings,
                    user=user,
                    password=password,
                    database=database,
                    host=host,
                    ignore_error=ignore_error,
                    parse=parse,
                )
                if check_callback(result):
                    return result
                time.sleep(sleep_time)
            except QueryRuntimeException as ex:
                exception_msg = f"{type(ex).__name__}: {str(ex)}"
                # Container is down, this is likely due to server crash.
                if "No route to host" in str(ex):
                    raise
                time.sleep(sleep_time)
            except Exception as ex:
                # logging.debug("Retry {} got exception {}".format(i + 1, ex))
                exception_msg = f"{type(ex).__name__}: {str(ex)}"
                time.sleep(sleep_time)

        if result is not None:
            return result
>       raise Exception(f"Can't execute query {sql}\n{exception_msg}")
E       Exception: Can't execute query
E       ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'
E       (
E       price UInt32,
E       date Date,
E       postcode1 LowCardinality(String),
E       postcode2 LowCardinality(String),
E       type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4),
E       is_new UInt8,
E       duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2),
E       addr1 String,
E       addr2 String,
E       street LowCardinality(String),
E       locality LowCardinality(String),
E       town LowCardinality(String),
E       district LowCardinality(String),
E       county LowCardinality(String)
E       )
E       ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1')
E       ORDER BY (postcode1, postcode2, addr1, addr2)
E       SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/')
E
E       QueryRuntimeException: Client failed! Return code: 198, stderr: Received exception from server (version 25.3.8):
E       Code: 198. DB::Exception: Received from 172.16.1.5:9000. DB::NetException. DB::NetException: Not found address of host: raw.githubusercontent.com: while loading disk metadata. Stack trace:
E
E       0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000f52e9bb
E       1. DB::NetException::NetException(int, FormatStringHelperImpl::type>, String const&) @ 0x000000000f503804
E       2. DB::(anonymous namespace)::hostByName(String const&) @ 0x000000000f505344
E       3. DB::DNSResolver::getResolvedIPAdressessWithFiltering(String const&) @ 0x000000000f5031f2
E       4. DB::DNSResolver::resolveIPAddressWithCache(String const&) @ 0x000000000f5043ba
E       5. std::vector> std::__function::__policy_invoker> (String const&)>::__call_impl[abi:ne190107]> (String const&)>>(std::__function::__policy_storage const*, String const&) @ 0x000000000f8c00f6
E       6. DB::HostResolver::update() @ 0x000000000f8bb12c
E       7. DB::HostResolver::HostResolver(String, Poco::Timespan) @ 0x000000000f8bafc9
E       8. std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler::make_shared_enabler(String const&) @ 0x000000000f8c066d
E       9. std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler> std::allocate_shared[abi:ne190107] DB::HostResolver::create(String const&)::make_shared_enabler, std::allocator DB::HostResolver::create(String const&)::make_shared_enabler>, String const&, 0>(std::allocator DB::HostResolver::create(String const&)::make_shared_enabler> const&, String const&) @ 0x000000000f8c04d2
E       10. DB::HostResolversPool::getResolver(String const&) @ 0x000000000f8bcfd3
E       11. DB::EndpointConnectionPool::getConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x000000000f8b0fb5
E       12. DB::makeHTTPSession(DB::HTTPConnectionGroupType, Poco::URI const&, DB::ConnectionTimeouts const&, DB::ProxyConfiguration const&, unsigned long*) @ 0x000000000f8ca076
E       13. DB::ReadWriteBufferFromHTTP::callImpl(Poco::Net::HTTPResponse&, String const&, std::optional const&, bool) const @ 0x0000000011376e5c
E       14. DB::ReadWriteBufferFromHTTP::callWithRedirects(Poco::Net::HTTPResponse&, String const&, std::optional const&) @ 0x000000001137729a
E       15. void std::__function::__policy_invoker::__call_impl[abi:ne190107]>(std::__function::__policy_storage const*) @ 0x000000001137b4ac
E       16. DB::ReadWriteBufferFromHTTP::doWithRetries(std::function&&, std::function, bool) const @ 0x00000000113745bb
E       17. DB::ReadWriteBufferFromHTTP::nextImpl() @ 0x000000001137911f
E       18. DB::WebObjectStorage::loadFiles(String const&, std::unique_lock const&) const @ 0x0000000012ca03c5
E       19. DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x0000000012ca3fc3
E       20. DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x0000000012ca3b87
E       21. DB::MetadataStorageFromStaticFilesWebServer::getStorageObjectsIfExist(String const&) const @ 0x0000000012c9dbbe
E       22. DB::DiskObjectStorage::readFileIfExists(String const&, DB::ReadSettings const&, std::optional, std::optional) const @ 0x0000000012c35750
E       23. DB::MergeTreeData::initializeDirectoriesAndFormatVersion(String const&, bool, String const&, bool) @ 0x000000001465ad66
E       24. DB::StorageReplicatedMergeTree::StorageReplicatedMergeTree(DB::TableZnodeInfo const&, DB::LoadingStrictnessLevel, DB::StorageID const&, String const&, DB::StorageInMemoryMetadata const&, std::shared_ptr, String const&, DB::MergeTreeData::MergingParams const&, std::unique_ptr>, bool, DB::ZooKeeperRetriesInfo const&) @ 0x0000000014143e9c
E       25. DB::create(DB::StorageFactory::Arguments const&) @ 0x0000000014a23d3b
E       26. DB::StorageFactory::get(DB::ASTCreateQuery const&, String const&, std::shared_ptr, std::shared_ptr, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, DB::LoadingStrictnessLevel, bool) const @ 0x000000001402645b
E       27. DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&, std::unique_ptr>&, DB::LoadingStrictnessLevel) @ 0x00000000135d99c0
E       28. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x00000000135ce682
E       29. DB::InterpreterCreateQuery::execute() @ 0x00000000135e1bd8
E       30. DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*, std::shared_ptr&) @ 0x0000000013a0cf6b
E       31. DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000013a07984
E       . (DNS_ERROR)
E       (query: ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'
E       (
E       price UInt32,
E       date Date,
E       postcode1 LowCardinality(String),
E       postcode2 LowCardinality(String),
E       type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4),
E       is_new UInt8,
E       duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2),
E       addr1 String,
E       addr2 String,
E       street LowCardinality(String),
E       locality LowCardinality(String),
E       town LowCardinality(String),
E       district LowCardinality(String),
E       county LowCardinality(String)
E       )
E       ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1')
E       ORDER BY (postcode1, postcode2, addr1, addr2)
E       SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/')
E       )

helpers/cluster.py:3712: Exception
---------------------------- Captured stdout setup -----------------------------
Copy common default production configuration from /clickhouse-config.
Files: config.xml, users.xml
Copy common default production configuration from /clickhouse-config.
Files: config.xml, users.xml ---------------------------- Captured stderr setup ----------------------------- Command:[docker ps | wc -l] Stdout:1 No running containers Pruning Docker networks Command:[docker network prune --force] Command:[sysctl net.ipv4.ip_local_port_range='55000 65535'] Stdout:net.ipv4.ip_local_port_range = 55000 65535 Running tests in /ClickHouse/tests/integration/test_attach_partition_using_copy/test.py Cluster start called. is_up=False Docker networks for project roottestattachpartitionusingcopy-gw5 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestattachpartitionusingcopy-gw5 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestattachpartitionusingcopy-gw5 are DRIVER VOLUME NAME Cleanup called Docker networks for project roottestattachpartitionusingcopy-gw5 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestattachpartitionusingcopy-gw5 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestattachpartitionusingcopy-gw5 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestattachpartitionusingcopy-gw5-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestattachpartitionusingcopy-gw5 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:1 Volumes pruned: 1 Setup directory for instance: replica1 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_attach_partition_using_copy/configs/remote_servers.xml'] to /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica1/configs/config.d Setup database dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica1/database Setup logs dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica1/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Setup directory for instance: replica2 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_attach_partition_using_copy/configs/remote_servers.xml'] to /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica2/configs/config.d Setup database dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica2/database Setup logs dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica2/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 
'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:5ccda723c1fc', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/.env --project-name roottestattachpartitionusingcopy-gw5 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica2/docker-compose.yml pull] Stderr: zoo1 Skipped - Image is already being pulled by replica1 Stderr: zoo2 Skipped - Image is already being pulled by replica1 Stderr: zoo3 Skipped - Image is already being pulled by replica1 Stderr: replica2 Skipped - Image is already being pulled by replica1 Stderr: replica1 Pulling Stderr: replica1 Pulled Setup ZooKeeper Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper1/log', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper1/config', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper1/coordination', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper2/log', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper2/config', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper2/coordination', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper3/log', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper3/config', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper3/coordination'] Command:[docker compose --project-name roottestattachpartitionusingcopy-gw5 --env-file 
/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] Stderr:time="2025-11-13T16:14:58Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Network roottestattachpartitionusingcopy-gw5_default Creating Stderr: Network roottestattachpartitionusingcopy-gw5_default Created Stderr: Container roottestattachpartitionusingcopy-gw5-zoo3-1 Creating Stderr: Container roottestattachpartitionusingcopy-gw5-zoo2-1 Creating Stderr: Container roottestattachpartitionusingcopy-gw5-zoo1-1 Creating Stderr: Container roottestattachpartitionusingcopy-gw5-zoo2-1 Created Stderr: Container roottestattachpartitionusingcopy-gw5-zoo3-1 Created Stderr: Container roottestattachpartitionusingcopy-gw5-zoo1-1 Created Stderr: Container roottestattachpartitionusingcopy-gw5-zoo1-1 Starting Stderr: Container roottestattachpartitionusingcopy-gw5-zoo2-1 Starting Stderr: Container roottestattachpartitionusingcopy-gw5-zoo3-1 Starting Stderr: Container roottestattachpartitionusingcopy-gw5-zoo1-1 Started Stderr: Container roottestattachpartitionusingcopy-gw5-zoo3-1 Started Stderr: Container roottestattachpartitionusingcopy-gw5-zoo2-1 Started Stderr:time="2025-11-13T16:14:58Z" level=debug msg="otel error" error="" Stderr:time="2025-11-13T16:14:58Z" level=debug msg="otel error" error="" Wait ZooKeeper to start get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw5-zoo1-1/json HTTP/1.1" 200 None get_kazoo_client: zoo1, ip:172.16.1.2, port:2181, use_ssl:False Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Failed connecting to Zookeeper within the connection retry policy. 
Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo2 http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw5-zoo2-1/json HTTP/1.1" 200 None get_kazoo_client: zoo2, ip:172.16.1.3, port:2181, use_ssl:False Connecting to 172.16.1.3(172.16.1.3):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo3 http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw5-zoo3-1/json HTTP/1.1" 200 None get_kazoo_client: zoo3, ip:172.16.1.4, port:2181, use_ssl:False Connecting to 172.16.1.4(172.16.1.4):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3') ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/.env --project-name roottestattachpartitionusingcopy-gw5 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica2/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/.env --project-name roottestattachpartitionusingcopy-gw5 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica2/docker-compose.yml up -d --no-recreate] Stderr: Container roottestattachpartitionusingcopy-gw5-zoo3-1 Running Stderr: Container roottestattachpartitionusingcopy-gw5-zoo1-1 Running Stderr: Container roottestattachpartitionusingcopy-gw5-zoo2-1 Running Stderr: Container roottestattachpartitionusingcopy-gw5-replica1-1 Creating Stderr: Container roottestattachpartitionusingcopy-gw5-replica2-1 Creating Stderr: Container roottestattachpartitionusingcopy-gw5-replica1-1 Created Stderr: Container roottestattachpartitionusingcopy-gw5-replica2-1 Created Stderr: Container roottestattachpartitionusingcopy-gw5-replica2-1 Starting Stderr: Container roottestattachpartitionusingcopy-gw5-replica1-1 
Starting Stderr: Container roottestattachpartitionusingcopy-gw5-replica1-1 Started Stderr: Container roottestattachpartitionusingcopy-gw5-replica2-1 Started ClickHouse instance created get_instance_ip instance_name=replica1 http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw5-replica1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=replica1 http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw5-replica1-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in replica1, ip: 172.16.1.5... http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw5-replica1-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c2e969c4d5214a3ff29a2daababcab70ca7ded5af0985e4c368fb1f232908afd/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c2e969c4d5214a3ff29a2daababcab70ca7ded5af0985e4c368fb1f232908afd/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c2e969c4d5214a3ff29a2daababcab70ca7ded5af0985e4c368fb1f232908afd/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c2e969c4d5214a3ff29a2daababcab70ca7ded5af0985e4c368fb1f232908afd/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c2e969c4d5214a3ff29a2daababcab70ca7ded5af0985e4c368fb1f232908afd/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c2e969c4d5214a3ff29a2daababcab70ca7ded5af0985e4c368fb1f232908afd/json HTTP/1.1" 200 None ClickHouse replica1 started get_instance_ip instance_name=replica2 http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw5-replica2-1/json HTTP/1.1" 200 None get_instance_ip instance_name=replica2 http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw5-replica2-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in replica2, ip: 172.16.1.6... http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw5-replica2-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/16c6f37f080179a9dae1204d9072af1fd033d596a4231964d1d5f026c3a6a0e3/json HTTP/1.1" 200 None ClickHouse replica2 started ------------------------------ Captured log setup ------------------------------ 2025-11-13 16:14:47.274000 [ 661 ] DEBUG : Command:[docker ps | wc -l] (cluster.py:121, run_and_check) 2025-11-13 16:14:47.303000 [ 661 ] DEBUG : Stdout:1 (cluster.py:145, run_and_check) 2025-11-13 16:14:47.303000 [ 661 ] DEBUG : No running containers (conftest.py:95, cleanup_environment) 2025-11-13 16:14:47.303000 [ 661 ] DEBUG : Pruning Docker networks (conftest.py:97, cleanup_environment) 2025-11-13 16:14:47.304000 [ 661 ] DEBUG : Command:[docker network prune --force] (cluster.py:121, run_and_check) 2025-11-13 16:14:47.325000 [ 661 ] DEBUG : Command:[sysctl net.ipv4.ip_local_port_range='55000 65535'] (cluster.py:121, run_and_check) 2025-11-13 16:14:47.330000 [ 661 ] DEBUG : Stdout:net.ipv4.ip_local_port_range = 55000 65535 (cluster.py:145, run_and_check) 2025-11-13 16:14:47.330000 [ 661 ] INFO : Running tests in /ClickHouse/tests/integration/test_attach_partition_using_copy/test.py (cluster.py:2738, start) 2025-11-13 16:14:47.330000 [ 661 ] DEBUG : Cluster start called. 
is_up=False (cluster.py:2745, start) 2025-11-13 16:14:47.354000 [ 661 ] DEBUG : Docker networks for project roottestattachpartitionusingcopy-gw5 are NETWORK ID NAME DRIVER SCOPE (cluster.py:830, print_all_docker_pieces) 2025-11-13 16:14:47.384000 [ 661 ] DEBUG : Docker containers for project roottestattachpartitionusingcopy-gw5 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:838, print_all_docker_pieces) 2025-11-13 16:14:47.408000 [ 661 ] DEBUG : Docker volumes for project roottestattachpartitionusingcopy-gw5 are DRIVER VOLUME NAME (cluster.py:846, print_all_docker_pieces) 2025-11-13 16:14:47.408000 [ 661 ] DEBUG : Cleanup called (cluster.py:851, cleanup) 2025-11-13 16:14:47.429000 [ 661 ] DEBUG : Docker networks for project roottestattachpartitionusingcopy-gw5 are NETWORK ID NAME DRIVER SCOPE (cluster.py:830, print_all_docker_pieces) 2025-11-13 16:14:47.451000 [ 661 ] DEBUG : Docker containers for project roottestattachpartitionusingcopy-gw5 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:838, print_all_docker_pieces) 2025-11-13 16:14:47.474000 [ 661 ] DEBUG : Docker volumes for project roottestattachpartitionusingcopy-gw5 are DRIVER VOLUME NAME (cluster.py:846, print_all_docker_pieces) 2025-11-13 16:14:47.474000 [ 661 ] DEBUG : Command:[docker container list --all --filter name='^/roottestattachpartitionusingcopy-gw5-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:121, run_and_check) 2025-11-13 16:14:47.496000 [ 661 ] DEBUG : Unstopped containers: {} (cluster.py:865, cleanup) 2025-11-13 16:14:47.496000 [ 661 ] DEBUG : No running containers for project: roottestattachpartitionusingcopy-gw5 (cluster.py:879, cleanup) 2025-11-13 16:14:47.496000 [ 661 ] DEBUG : Trying to prune unused networks... (cluster.py:885, cleanup) 2025-11-13 16:14:47.519000 [ 661 ] DEBUG : Trying to prune unused images... (cluster.py:901, cleanup) 2025-11-13 16:14:47.519000 [ 661 ] DEBUG : Command:[docker image prune -f] (cluster.py:121, run_and_check) 2025-11-13 16:14:47.545000 [ 661 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:145, run_and_check) 2025-11-13 16:14:47.545000 [ 661 ] DEBUG : Images pruned (cluster.py:904, cleanup) 2025-11-13 16:14:47.545000 [ 661 ] DEBUG : Trying to prune unused volumes... 
(cluster.py:910, cleanup) 2025-11-13 16:14:47.545000 [ 661 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:121, run_and_check) 2025-11-13 16:14:47.568000 [ 661 ] DEBUG : Stdout:1 (cluster.py:145, run_and_check) 2025-11-13 16:14:47.568000 [ 661 ] DEBUG : Volumes pruned: 1 (cluster.py:915, cleanup) 2025-11-13 16:14:47.568000 [ 661 ] DEBUG : Setup directory for instance: replica1 (cluster.py:2758, start) 2025-11-13 16:14:47.569000 [ 661 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4628, create_dir) 2025-11-13 16:14:47.569000 [ 661 ] DEBUG : Create directory for common tests configuration (cluster.py:4633, create_dir) 2025-11-13 16:14:47.569000 [ 661 ] DEBUG : Copy common configuration from helpers (cluster.py:4653, create_dir) 2025-11-13 16:14:47.570000 [ 661 ] DEBUG : Generate and write macros file (cluster.py:4705, create_dir) 2025-11-13 16:14:47.570000 [ 661 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_attach_partition_using_copy/configs/remote_servers.xml'] to /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica1/configs/config.d (cluster.py:4741, create_dir) 2025-11-13 16:14:47.571000 [ 661 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica1/database (cluster.py:4758, create_dir) 2025-11-13 16:14:47.571000 [ 661 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica1/logs (cluster.py:4769, create_dir) 2025-11-13 16:14:47.572000 [ 661 ] DEBUG : Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] (cluster.py:4850, create_dir) 2025-11-13 16:14:47.572000 [ 661 ] DEBUG : Setup directory for instance: replica2 (cluster.py:2758, start) 2025-11-13 16:14:47.572000 [ 661 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4628, create_dir) 2025-11-13 16:14:47.572000 [ 661 ] DEBUG : Create directory for common tests configuration (cluster.py:4633, create_dir) 2025-11-13 16:14:47.572000 [ 661 ] DEBUG : Copy common configuration from helpers (cluster.py:4653, create_dir) 2025-11-13 16:14:47.573000 [ 661 ] DEBUG : Generate and write macros file (cluster.py:4705, create_dir) 2025-11-13 16:14:47.573000 [ 661 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_attach_partition_using_copy/configs/remote_servers.xml'] to /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica2/configs/config.d (cluster.py:4741, create_dir) 2025-11-13 16:14:47.573000 [ 661 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica2/database (cluster.py:4758, create_dir) 2025-11-13 16:14:47.574000 [ 661 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica2/logs (cluster.py:4769, create_dir) 2025-11-13 16:14:47.574000 [ 661 ] DEBUG : Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] (cluster.py:4850, create_dir) 2025-11-13 16:14:47.574000 [ 661 ] DEBUG : Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 
'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:5ccda723c1fc', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/.env (cluster.py:96, _create_env_file) 2025-11-13 16:14:47.574000 [ 661 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2025-11-13 16:14:47.574000 [ 661 ] DEBUG : No config file found (config.py:28, find_config_file) 2025-11-13 16:14:47.574000 [ 661 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2025-11-13 16:14:47.575000 [ 661 ] DEBUG : No config file found (config.py:28, find_config_file) 2025-11-13 16:14:47.584000 [ 661 ] DEBUG : http://localhost:None "GET /version HTTP/1.1" 200 826 (connectionpool.py:547, _make_request) 2025-11-13 16:14:47.585000 [ 661 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/.env --project-name roottestattachpartitionusingcopy-gw5 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica2/docker-compose.yml pull] (cluster.py:121, run_and_check) 2025-11-13 16:14:58.070000 [ 661 ] DEBUG : Stderr: zoo1 Skipped - Image is already being pulled by replica1 (cluster.py:147, run_and_check) 2025-11-13 16:14:58.070000 [ 661 ] DEBUG : Stderr: zoo2 Skipped - Image is already being pulled by replica1 (cluster.py:147, run_and_check) 2025-11-13 16:14:58.070000 [ 661 ] DEBUG : Stderr: zoo3 Skipped - Image is already being pulled by replica1 (cluster.py:147, run_and_check) 2025-11-13 16:14:58.070000 [ 661 ] DEBUG : Stderr: replica2 Skipped - Image is already being pulled by replica1 (cluster.py:147, run_and_check) 2025-11-13 16:14:58.071000 [ 661 ] DEBUG : Stderr: replica1 Pulling (cluster.py:147, run_and_check) 2025-11-13 16:14:58.071000 [ 661 ] DEBUG : Stderr: replica1 Pulled (cluster.py:147, run_and_check) 2025-11-13 16:14:58.071000 [ 661 ] DEBUG : Setup ZooKeeper (cluster.py:2799, start) 
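The harness drives the whole cluster lifecycle through plain `docker compose` invocations like the `pull` command logged just above. A minimal standalone sketch (hypothetical, not the harness code in helpers/cluster.py) of reproducing those invocations outside the runner; for brevity only the keeper compose file is wired in, whereas the logged commands also pass the two replica docker-compose.yml files:

# Hypothetical reproduction of the "Command:[docker compose ...]" lines above.
import subprocess

ENV_FILE = "/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/.env"
PROJECT = "roottestattachpartitionusingcopy-gw5"
KEEPER_COMPOSE = "/ClickHouse/tests/integration/compose/docker_compose_keeper.yml"

def compose(*args: str) -> None:
    # Mirrors run_and_check: print the command, run it, fail loudly on a non-zero exit.
    cmd = ["docker", "compose", "--env-file", ENV_FILE, "--project-name", PROJECT,
           "--file", KEEPER_COMPOSE, *args]
    print("Command:", " ".join(cmd))
    subprocess.run(cmd, check=True)

compose("pull")        # corresponds to the "replica1 Pulling ... Pulled" stderr above
compose("up", "-d")    # starts the zoo1/zoo2/zoo3 keeper containers before the replicas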
2025-11-13 16:14:58.071000 [ 661 ] DEBUG : Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper1/log', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper1/config', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper1/coordination', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper2/log', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper2/config', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper2/coordination', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper3/log', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper3/config', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/keeper3/coordination'] (cluster.py:2800, start) 2025-11-13 16:14:58.072000 [ 661 ] DEBUG : Command:[docker compose --project-name roottestattachpartitionusingcopy-gw5 --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] (cluster.py:121, run_and_check) 2025-11-13 16:14:59.006000 [ 661 ] DEBUG : Stderr:time="2025-11-13T16:14:58Z" level=trace msg="Docker Desktop integration not enabled" (cluster.py:147, run_and_check) 2025-11-13 16:14:59.007000 [ 661 ] DEBUG : Stderr: Network roottestattachpartitionusingcopy-gw5_default Creating (cluster.py:147, run_and_check) 2025-11-13 16:14:59.007000 [ 661 ] DEBUG : Stderr: Network roottestattachpartitionusingcopy-gw5_default Created (cluster.py:147, run_and_check) 2025-11-13 16:14:59.007000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo3-1 Creating (cluster.py:147, run_and_check) 2025-11-13 16:14:59.007000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo2-1 Creating (cluster.py:147, run_and_check) 2025-11-13 16:14:59.007000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo1-1 Creating (cluster.py:147, run_and_check) 2025-11-13 16:14:59.007000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo2-1 Created (cluster.py:147, run_and_check) 2025-11-13 16:14:59.007000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo3-1 Created (cluster.py:147, run_and_check) 2025-11-13 16:14:59.008000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo1-1 Created (cluster.py:147, run_and_check) 2025-11-13 16:14:59.008000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo1-1 Starting (cluster.py:147, run_and_check) 2025-11-13 16:14:59.008000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo2-1 Starting (cluster.py:147, run_and_check) 2025-11-13 16:14:59.008000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo3-1 Starting (cluster.py:147, run_and_check) 2025-11-13 16:14:59.008000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo1-1 Started (cluster.py:147, run_and_check) 2025-11-13 16:14:59.008000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo3-1 Started (cluster.py:147, run_and_check) 2025-11-13 16:14:59.008000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo2-1 Started (cluster.py:147, 
run_and_check) 2025-11-13 16:14:59.008000 [ 661 ] DEBUG : Stderr:time="2025-11-13T16:14:58Z" level=debug msg="otel error" error="" (cluster.py:147, run_and_check) 2025-11-13 16:14:59.008000 [ 661 ] DEBUG : Stderr:time="2025-11-13T16:14:58Z" level=debug msg="otel error" error="" (cluster.py:147, run_and_check) 2025-11-13 16:14:59.009000 [ 661 ] DEBUG : Wait ZooKeeper to start (cluster.py:2436, wait_zookeeper_to_start) 2025-11-13 16:14:59.009000 [ 661 ] DEBUG : get_instance_ip instance_name=zoo1 (cluster.py:2005, get_instance_ip) 2025-11-13 16:14:59.012000 [ 661 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw5-zoo1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:14:59.013000 [ 661 ] DEBUG : get_kazoo_client: zoo1, ip:172.16.1.2, port:2181, use_ssl:False (cluster.py:3312, get_kazoo_client) 2025-11-13 16:14:59.015000 [ 661 ] INFO : Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False (connection.py:650, _connect) 2025-11-13 16:14:59.015000 [ 661 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-11-13 16:14:59.146000 [ 661 ] INFO : Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False (connection.py:650, _connect) 2025-11-13 16:14:59.147000 [ 661 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-11-13 16:14:59.379000 [ 661 ] INFO : Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False (connection.py:650, _connect) 2025-11-13 16:14:59.380000 [ 661 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-11-13 16:14:59.694000 [ 661 ] INFO : Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False (connection.py:650, _connect) 2025-11-13 16:14:59.695000 [ 661 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-11-13 16:15:00.231000 [ 661 ] INFO : Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False (connection.py:650, _connect) 2025-11-13 16:15:00.232000 [ 661 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-11-13 16:15:01.446000 [ 661 ] INFO : Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False (connection.py:650, _connect) 2025-11-13 16:15:01.447000 [ 661 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-11-13 16:15:03.738000 [ 661 ] INFO : Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False (connection.py:650, _connect) 2025-11-13 16:15:03.738000 [ 661 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit) 2025-11-13 16:15:03.743000 [ 661 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback) 2025-11-13 16:15:03.744000 [ 661 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit) 2025-11-13 16:15:03.745000 [ 661 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response) 2025-11-13 16:15:03.745000 [ 661 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit) 2025-11-13 16:15:03.749000 [ 661 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt) 2025-11-13 
16:15:03.749000 [ 661 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt) 2025-11-13 16:15:03.749000 [ 661 ] INFO : Zookeeper connection lost (client.py:543, _session_callback) 2025-11-13 16:15:03.849000 [ 661 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. (connection.py:515, zk_loop) 2025-11-13 16:15:03.850000 [ 661 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback) 2025-11-13 16:15:03.850000 [ 661 ] DEBUG : get_instance_ip instance_name=zoo2 (cluster.py:2005, get_instance_ip) 2025-11-13 16:15:03.853000 [ 661 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw5-zoo2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:03.853000 [ 661 ] DEBUG : get_kazoo_client: zoo2, ip:172.16.1.3, port:2181, use_ssl:False (cluster.py:3312, get_kazoo_client) 2025-11-13 16:15:03.854000 [ 661 ] INFO : Connecting to 172.16.1.3(172.16.1.3):2181, use_ssl: False (connection.py:650, _connect) 2025-11-13 16:15:03.855000 [ 661 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit) 2025-11-13 16:15:03.876000 [ 661 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback) 2025-11-13 16:15:03.877000 [ 661 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit) 2025-11-13 16:15:03.878000 [ 661 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response) 2025-11-13 16:15:03.879000 [ 661 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit) 2025-11-13 16:15:03.881000 [ 661 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt) 2025-11-13 16:15:03.882000 [ 661 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt) 2025-11-13 16:15:03.882000 [ 661 ] INFO : Zookeeper connection lost (client.py:543, _session_callback) 2025-11-13 16:15:03.976000 [ 661 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. 
(connection.py:515, zk_loop) 2025-11-13 16:15:03.976000 [ 661 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback) 2025-11-13 16:15:03.977000 [ 661 ] DEBUG : get_instance_ip instance_name=zoo3 (cluster.py:2005, get_instance_ip) 2025-11-13 16:15:03.981000 [ 661 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw5-zoo3-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:03.982000 [ 661 ] DEBUG : get_kazoo_client: zoo3, ip:172.16.1.4, port:2181, use_ssl:False (cluster.py:3312, get_kazoo_client) 2025-11-13 16:15:03.986000 [ 661 ] INFO : Connecting to 172.16.1.4(172.16.1.4):2181, use_ssl: False (connection.py:650, _connect) 2025-11-13 16:15:03.988000 [ 661 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit) 2025-11-13 16:15:03.990000 [ 661 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback) 2025-11-13 16:15:03.991000 [ 661 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit) 2025-11-13 16:15:03.993000 [ 661 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response) 2025-11-13 16:15:03.994000 [ 661 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit) 2025-11-13 16:15:03.997000 [ 661 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt) 2025-11-13 16:15:03.997000 [ 661 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt) 2025-11-13 16:15:03.997000 [ 661 ] INFO : Zookeeper connection lost (client.py:543, _session_callback) 2025-11-13 16:15:04.098000 [ 661 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. 
(connection.py:515, zk_loop) 2025-11-13 16:15:04.098000 [ 661 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback) 2025-11-13 16:15:04.099000 [ 661 ] DEBUG : All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3') (cluster.py:2452, wait_zookeeper_nodes_to_start) 2025-11-13 16:15:04.100000 [ 661 ] DEBUG : ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/.env --project-name roottestattachpartitionusingcopy-gw5 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica2/docker-compose.yml up -d --no-recreate') (cluster.py:3139, start) 2025-11-13 16:15:04.100000 [ 661 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/.env --project-name roottestattachpartitionusingcopy-gw5 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica2/docker-compose.yml up -d --no-recreate] (cluster.py:121, run_and_check) 2025-11-13 16:15:05.348000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo3-1 Running (cluster.py:147, run_and_check) 2025-11-13 16:15:05.348000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo1-1 Running (cluster.py:147, run_and_check) 2025-11-13 16:15:05.348000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo2-1 Running (cluster.py:147, run_and_check) 2025-11-13 16:15:05.348000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-replica1-1 Creating (cluster.py:147, run_and_check) 2025-11-13 16:15:05.349000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-replica2-1 Creating (cluster.py:147, run_and_check) 2025-11-13 16:15:05.349000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-replica1-1 Created (cluster.py:147, run_and_check) 2025-11-13 16:15:05.349000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-replica2-1 Created (cluster.py:147, run_and_check) 2025-11-13 16:15:05.349000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-replica2-1 Starting (cluster.py:147, run_and_check) 2025-11-13 16:15:05.349000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-replica1-1 Starting (cluster.py:147, run_and_check) 2025-11-13 16:15:05.349000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-replica1-1 Started (cluster.py:147, run_and_check) 2025-11-13 16:15:05.349000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-replica2-1 Started (cluster.py:147, run_and_check) 2025-11-13 16:15:05.349000 [ 661 ] DEBUG : ClickHouse instance created (cluster.py:3147, start) 2025-11-13 16:15:05.349000 [ 661 ] DEBUG : get_instance_ip instance_name=replica1 (cluster.py:2005, get_instance_ip) 2025-11-13 16:15:05.352000 [ 661 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw5-replica1-1/json HTTP/1.1" 200 None 
(connectionpool.py:547, _make_request) 2025-11-13 16:15:05.352000 [ 661 ] DEBUG : get_instance_ip instance_name=replica1 (cluster.py:2015, get_instance_global_ipv6) 2025-11-13 16:15:05.354000 [ 661 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw5-replica1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:05.355000 [ 661 ] DEBUG : Waiting for ClickHouse start in replica1, ip: 172.16.1.5... (cluster.py:3155, start) 2025-11-13 16:15:05.356000 [ 661 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw5-replica1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:05.358000 [ 661 ] DEBUG : http://localhost:None "GET /v1.46/containers/c2e969c4d5214a3ff29a2daababcab70ca7ded5af0985e4c368fb1f232908afd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:05.461000 [ 661 ] DEBUG : http://localhost:None "GET /v1.46/containers/c2e969c4d5214a3ff29a2daababcab70ca7ded5af0985e4c368fb1f232908afd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:05.564000 [ 661 ] DEBUG : http://localhost:None "GET /v1.46/containers/c2e969c4d5214a3ff29a2daababcab70ca7ded5af0985e4c368fb1f232908afd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:05.668000 [ 661 ] DEBUG : http://localhost:None "GET /v1.46/containers/c2e969c4d5214a3ff29a2daababcab70ca7ded5af0985e4c368fb1f232908afd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:05.772000 [ 661 ] DEBUG : http://localhost:None "GET /v1.46/containers/c2e969c4d5214a3ff29a2daababcab70ca7ded5af0985e4c368fb1f232908afd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:05.875000 [ 661 ] DEBUG : http://localhost:None "GET /v1.46/containers/c2e969c4d5214a3ff29a2daababcab70ca7ded5af0985e4c368fb1f232908afd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:05.876000 [ 661 ] DEBUG : ClickHouse replica1 started (cluster.py:3159, start) 2025-11-13 16:15:05.876000 [ 661 ] DEBUG : get_instance_ip instance_name=replica2 (cluster.py:2005, get_instance_ip) 2025-11-13 16:15:05.877000 [ 661 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw5-replica2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:05.877000 [ 661 ] DEBUG : get_instance_ip instance_name=replica2 (cluster.py:2015, get_instance_global_ipv6) 2025-11-13 16:15:05.879000 [ 661 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw5-replica2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:05.880000 [ 661 ] DEBUG : Waiting for ClickHouse start in replica2, ip: 172.16.1.6... 
(cluster.py:3155, start) 2025-11-13 16:15:05.881000 [ 661 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw5-replica2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:05.884000 [ 661 ] DEBUG : http://localhost:None "GET /v1.46/containers/16c6f37f080179a9dae1204d9072af1fd033d596a4231964d1d5f026c3a6a0e3/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:05.885000 [ 661 ] DEBUG : ClickHouse replica2 started (cluster.py:3159, start) ----------------------------- Captured stderr call ----------------------------- Executing query DROP TABLE IF EXISTS source SYNC on replica1 Executing query DROP TABLE IF EXISTS destination SYNC on replica1 Executing query DROP TABLE IF EXISTS source SYNC on replica2 Executing query DROP TABLE IF EXISTS destination SYNC on replica2 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 ------------------------------ Captured log call ------------------------------- 2025-11-13 16:15:05.888000 [ 661 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica1 (cluster.py:3648, query) 2025-11-13 16:15:05.959000 [ 661 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica1 (cluster.py:3648, query) 
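The "Wait ZooKeeper to start" phase captured earlier in this setup log is a kazoo probe: the client keeps retrying through "Connection refused" until Keeper accepts connections, then a GetChildren('/') round-trip (returning ['keeper']) confirms the ensemble is serving requests. A minimal sketch of that readiness check, assuming kazoo is installed; the helper name and timings are illustrative, not the harness implementation:

# Hypothetical readiness probe mirroring the connect/retry loop in the log above.
import time
from kazoo.client import KazooClient
from kazoo.handlers.threading import KazooTimeoutError

def wait_keeper(ip: str, port: int = 2181, deadline: float = 60.0) -> None:
    start = time.time()
    while time.time() - start < deadline:
        zk = KazooClient(hosts=f"{ip}:{port}", timeout=30.0)
        try:
            zk.start(timeout=5.0)        # raises until the node accepts connections
            print(zk.get_children("/"))  # the log shows ['keeper'] once it is up
            return
        except KazooTimeoutError:
            time.sleep(0.5)              # back off between attempts, as the log shows
        finally:
            zk.stop()
            zk.close()
    raise TimeoutError(f"Keeper at {ip}:{port} did not become ready")

for ip in ("172.16.1.2", "172.16.1.3", "172.16.1.4"):  # zoo1, zoo2, zoo3 from the log
    wait_keeper(ip)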
2025-11-13 16:15:06.025000 [ 661 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica2 (cluster.py:3648, query) 2025-11-13 16:15:06.291000 [ 661 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica2 (cluster.py:3648, query) 2025-11-13 16:15:06.357000 [ 661 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query) 2025-11-13 16:16:00.077000 [ 661 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query) 2025-11-13 16:16:55.112000 [ 661 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query) ____________________ test_cow_policy[cow_policy_multi_disk] ____________________ [gw6] linux -- Python 3.10.12 /usr/bin/python3 start_cluster = storage_policy = 'cow_policy_multi_disk' @pytest.mark.parametrize("storage_policy", ["cow_policy_multi_disk", "cow_policy_multi_volume"]) def test_cow_policy(start_cluster, storage_policy): try: > node.query_with_retry( f""" ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street 
LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = '{storage_policy}' """, timeout=60, retry_count=3, ) test_cow_policy/test.py:24: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = sql = "\n ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'\n (\n ...R BY (postcode1, postcode2, addr1, addr2)\n SETTINGS storage_policy = 'cow_policy_multi_disk'\n " stdin = None, timeout = 60, settings = None, user = None, password = None database = None, host = None, ignore_error = False, retry_count = 3 sleep_time = 0.5 check_callback = at 0x7fc0c028d630> parse = False def query_with_retry( self, sql, stdin=None, timeout=None, settings=None, user=None, password=None, database=None, host=None, ignore_error=False, retry_count=20, sleep_time=0.5, check_callback=lambda x: True, parse=False, ): # logging.debug(f"Executing query {sql} on {self.name}") result = None exception_msg = "" for i in range(retry_count): try: result = self.query( sql, stdin=stdin, timeout=timeout, settings=settings, user=user, password=password, database=database, host=host, ignore_error=ignore_error, parse=parse, ) if check_callback(result): return result time.sleep(sleep_time) except QueryRuntimeException as ex: exception_msg = f"{type(ex).__name__}: {str(ex)}" # Container is down, this is likely due to server crash. if "No route to host" in str(ex): raise time.sleep(sleep_time) except Exception as ex: # logging.debug("Retry {} got exception {}".format(i + 1, ex)) exception_msg = f"{type(ex).__name__}: {str(ex)}" time.sleep(sleep_time) if result is not None: return result > raise Exception(f"Can't execute query {sql}\n{exception_msg}") E Exception: Can't execute query E ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS storage_policy = 'cow_policy_multi_disk' E E QueryRuntimeException: Client failed! Return code: 198, stderr: Received exception from server (version 25.3.8): E Code: 198. DB::Exception: Received from 172.16.2.2:9000. DB::NetException. DB::NetException: Not found address of host: raw.githubusercontent.com: while loading disk metadata. Stack trace: E E 0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000f52e9bb E 1. DB::NetException::NetException(int, FormatStringHelperImpl::type>, String const&) @ 0x000000000f503804 E 2. DB::(anonymous namespace)::hostByName(String const&) @ 0x000000000f505344 E 3. DB::DNSResolver::getResolvedIPAdressessWithFiltering(String const&) @ 0x000000000f5031f2 E 4. DB::DNSResolver::resolveIPAddressWithCache(String const&) @ 0x000000000f5043ba E 5. std::vector> std::__function::__policy_invoker> (String const&)>::__call_impl[abi:ne190107]> (String const&)>>(std::__function::__policy_storage const*, String const&) @ 0x000000000f8c00f6 E 6. 
DB::HostResolver::update() @ 0x000000000f8bb12c E 7. DB::HostResolver::HostResolver(String, Poco::Timespan) @ 0x000000000f8bafc9 E 8. std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler::make_shared_enabler(String const&) @ 0x000000000f8c066d E 9. std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler> std::allocate_shared[abi:ne190107] DB::HostResolver::create(String const&)::make_shared_enabler, std::allocator DB::HostResolver::create(String const&)::make_shared_enabler>, String const&, 0>(std::allocator DB::HostResolver::create(String const&)::make_shared_enabler> const&, String const&) @ 0x000000000f8c04d2 E 10. DB::HostResolversPool::getResolver(String const&) @ 0x000000000f8bcfd3 E 11. DB::EndpointConnectionPool::getConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x000000000f8b0fb5 E 12. DB::makeHTTPSession(DB::HTTPConnectionGroupType, Poco::URI const&, DB::ConnectionTimeouts const&, DB::ProxyConfiguration const&, unsigned long*) @ 0x000000000f8ca076 E 13. DB::ReadWriteBufferFromHTTP::callImpl(Poco::Net::HTTPResponse&, String const&, std::optional const&, bool) const @ 0x0000000011376e5c E 14. DB::ReadWriteBufferFromHTTP::callWithRedirects(Poco::Net::HTTPResponse&, String const&, std::optional const&) @ 0x000000001137729a E 15. void std::__function::__policy_invoker::__call_impl[abi:ne190107]>(std::__function::__policy_storage const*) @ 0x000000001137b4ac E 16. DB::ReadWriteBufferFromHTTP::doWithRetries(std::function&&, std::function, bool) const @ 0x00000000113745bb E 17. DB::ReadWriteBufferFromHTTP::nextImpl() @ 0x000000001137911f E 18. DB::WebObjectStorage::loadFiles(String const&, std::unique_lock const&) const @ 0x0000000012ca03c5 E 19. DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x0000000012ca3fc3 E 20. DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x0000000012ca3b87 E 21. DB::MetadataStorageFromStaticFilesWebServer::getStorageObjectsIfExist(String const&) const @ 0x0000000012c9dbbe E 22. DB::DiskObjectStorage::readFileIfExists(String const&, DB::ReadSettings const&, std::optional, std::optional) const @ 0x0000000012c35750 E 23. DB::MergeTreeData::initializeDirectoriesAndFormatVersion(String const&, bool, String const&, bool) @ 0x000000001465ad66 E 24. DB::StorageMergeTree::StorageMergeTree(DB::StorageID const&, String const&, DB::StorageInMemoryMetadata const&, DB::LoadingStrictnessLevel, std::shared_ptr, String const&, DB::MergeTreeData::MergingParams const&, std::unique_ptr>) @ 0x0000000014a270e8 E 25. DB::create(DB::StorageFactory::Arguments const&) @ 0x0000000014a2390c E 26. DB::StorageFactory::get(DB::ASTCreateQuery const&, String const&, std::shared_ptr, std::shared_ptr, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, DB::LoadingStrictnessLevel, bool) const @ 0x000000001402645b E 27. DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&, std::unique_ptr>&, DB::LoadingStrictnessLevel) @ 0x00000000135d99c0 E 28. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x00000000135ce682 E 29. DB::InterpreterCreateQuery::execute() @ 0x00000000135e1bd8 E 30. DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*, std::shared_ptr&) @ 0x0000000013a0cf6b E 31. DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000013a07984 E . 
(DNS_ERROR) E (query: ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS storage_policy = 'cow_policy_multi_disk' E ) helpers/cluster.py:3712: Exception ---------------------------- Captured stdout setup ----------------------------- Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml ---------------------------- Captured stderr setup ----------------------------- Running tests in /ClickHouse/tests/integration/test_cow_policy/test.py Cluster start called. is_up=False Docker networks for project roottestcowpolicy-gw6 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestcowpolicy-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestcowpolicy-gw6 are DRIVER VOLUME NAME Cleanup called Docker networks for project roottestcowpolicy-gw6 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestcowpolicy-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestcowpolicy-gw6 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestcowpolicy-gw6-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestcowpolicy-gw6 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... 
Command:[docker volume ls | wc -l] Stdout:9 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 9 Setup directory for instance: node Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_cow_policy/configs/overrides.yaml'] to /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/node/configs/config.d Setup database dir /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/node/database Setup logs dir /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/node/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/.env --project-name roottestcowpolicy-gw6 --file /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/node/docker-compose.yml pull] Stderr: node Pulling Stderr: node Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/.env --project-name roottestcowpolicy-gw6 --file /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/node/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/.env --project-name roottestcowpolicy-gw6 --file /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/node/docker-compose.yml up -d --no-recreate] Stderr: Network roottestcowpolicy-gw6_default Creating Stderr: Network roottestcowpolicy-gw6_default Created Stderr: Container roottestcowpolicy-gw6-node-1 Creating Stderr: Container roottestcowpolicy-gw6-node-1 Created Stderr: Container roottestcowpolicy-gw6-node-1 Starting Stderr: Container roottestcowpolicy-gw6-node-1 Started ClickHouse instance created get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestcowpolicy-gw6-node-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestcowpolicy-gw6-node-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node, ip: 172.16.2.2... 
http://localhost:None "GET /v1.46/containers/roottestcowpolicy-gw6-node-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/ffe807ca902fe83ad624818d9807258f98353e1c9360db6c03e5115a2d3ed4b8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/ffe807ca902fe83ad624818d9807258f98353e1c9360db6c03e5115a2d3ed4b8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/ffe807ca902fe83ad624818d9807258f98353e1c9360db6c03e5115a2d3ed4b8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/ffe807ca902fe83ad624818d9807258f98353e1c9360db6c03e5115a2d3ed4b8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/ffe807ca902fe83ad624818d9807258f98353e1c9360db6c03e5115a2d3ed4b8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/ffe807ca902fe83ad624818d9807258f98353e1c9360db6c03e5115a2d3ed4b8/json HTTP/1.1" 200 None ClickHouse node started ------------------------------ Captured log setup ------------------------------ 2025-11-13 16:15:06.199000 [ 694 ] INFO : Running tests in /ClickHouse/tests/integration/test_cow_policy/test.py (cluster.py:2738, start) 2025-11-13 16:15:06.199000 [ 694 ] DEBUG : Cluster start called. is_up=False (cluster.py:2745, start) 2025-11-13 16:15:06.224000 [ 694 ] DEBUG : Docker networks for project roottestcowpolicy-gw6 are NETWORK ID NAME DRIVER SCOPE (cluster.py:830, print_all_docker_pieces) 2025-11-13 16:15:06.259000 [ 694 ] DEBUG : Docker containers for project roottestcowpolicy-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:838, print_all_docker_pieces) 2025-11-13 16:15:06.283000 [ 694 ] DEBUG : Docker volumes for project roottestcowpolicy-gw6 are DRIVER VOLUME NAME (cluster.py:846, print_all_docker_pieces) 2025-11-13 16:15:06.283000 [ 694 ] DEBUG : Cleanup called (cluster.py:851, cleanup) 2025-11-13 16:15:06.307000 [ 694 ] DEBUG : Docker networks for project roottestcowpolicy-gw6 are NETWORK ID NAME DRIVER SCOPE (cluster.py:830, print_all_docker_pieces) 2025-11-13 16:15:06.326000 [ 694 ] DEBUG : Docker containers for project roottestcowpolicy-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:838, print_all_docker_pieces) 2025-11-13 16:15:06.347000 [ 694 ] DEBUG : Docker volumes for project roottestcowpolicy-gw6 are DRIVER VOLUME NAME (cluster.py:846, print_all_docker_pieces) 2025-11-13 16:15:06.348000 [ 694 ] DEBUG : Command:[docker container list --all --filter name='^/roottestcowpolicy-gw6-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:121, run_and_check) 2025-11-13 16:15:06.368000 [ 694 ] DEBUG : Unstopped containers: {} (cluster.py:865, cleanup) 2025-11-13 16:15:06.368000 [ 694 ] DEBUG : No running containers for project: roottestcowpolicy-gw6 (cluster.py:879, cleanup) 2025-11-13 16:15:06.368000 [ 694 ] DEBUG : Trying to prune unused networks... (cluster.py:885, cleanup) 2025-11-13 16:15:06.387000 [ 694 ] DEBUG : Trying to prune unused images... (cluster.py:901, cleanup) 2025-11-13 16:15:06.388000 [ 694 ] DEBUG : Command:[docker image prune -f] (cluster.py:121, run_and_check) 2025-11-13 16:15:06.425000 [ 694 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:145, run_and_check) 2025-11-13 16:15:06.426000 [ 694 ] DEBUG : Images pruned (cluster.py:904, cleanup) 2025-11-13 16:15:06.426000 [ 694 ] DEBUG : Trying to prune unused volumes... 
(cluster.py:910, cleanup) 2025-11-13 16:15:06.426000 [ 694 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:121, run_and_check) 2025-11-13 16:15:06.453000 [ 694 ] DEBUG : Stdout:9 (cluster.py:145, run_and_check) 2025-11-13 16:15:06.453000 [ 694 ] DEBUG : Command:[docker volume prune -f] (cluster.py:121, run_and_check) 2025-11-13 16:15:06.482000 [ 694 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:145, run_and_check) 2025-11-13 16:15:06.482000 [ 694 ] DEBUG : Volumes pruned: 9 (cluster.py:915, cleanup) 2025-11-13 16:15:06.483000 [ 694 ] DEBUG : Setup directory for instance: node (cluster.py:2758, start) 2025-11-13 16:15:06.483000 [ 694 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4628, create_dir) 2025-11-13 16:15:06.484000 [ 694 ] DEBUG : Create directory for common tests configuration (cluster.py:4633, create_dir) 2025-11-13 16:15:06.484000 [ 694 ] DEBUG : Copy common configuration from helpers (cluster.py:4653, create_dir) 2025-11-13 16:15:06.485000 [ 694 ] DEBUG : Generate and write macros file (cluster.py:4705, create_dir) 2025-11-13 16:15:06.485000 [ 694 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_cow_policy/configs/overrides.yaml'] to /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/node/configs/config.d (cluster.py:4741, create_dir) 2025-11-13 16:15:06.486000 [ 694 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/node/database (cluster.py:4758, create_dir) 2025-11-13 16:15:06.487000 [ 694 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/node/logs (cluster.py:4769, create_dir) 2025-11-13 16:15:06.487000 [ 694 ] DEBUG : Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] (cluster.py:4850, create_dir) 2025-11-13 16:15:06.487000 [ 694 ] DEBUG : Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/.env (cluster.py:96, _create_env_file) 2025-11-13 16:15:06.488000 [ 694 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2025-11-13 16:15:06.488000 [ 694 ] DEBUG : No config file found (config.py:28, find_config_file) 2025-11-13 16:15:06.488000 [ 694 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2025-11-13 16:15:06.489000 [ 694 ] DEBUG : No config file found (config.py:28, find_config_file) 2025-11-13 16:15:06.501000 [ 694 ] DEBUG : http://localhost:None "GET /version HTTP/1.1" 200 826 (connectionpool.py:547, _make_request) 2025-11-13 16:15:06.501000 [ 694 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/.env --project-name roottestcowpolicy-gw6 --file /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/node/docker-compose.yml pull] (cluster.py:121, run_and_check) 2025-11-13 16:15:15.963000 [ 694 ] DEBUG : Stderr: node Pulling (cluster.py:147, run_and_check) 2025-11-13 16:15:15.964000 [ 694 ] DEBUG : Stderr: node Pulled (cluster.py:147, run_and_check) 2025-11-13 16:15:15.964000 [ 694 ] DEBUG : ('Trying to create 
ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/.env --project-name roottestcowpolicy-gw6 --file /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/node/docker-compose.yml up -d --no-recreate') (cluster.py:3139, start) 2025-11-13 16:15:15.964000 [ 694 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/.env --project-name roottestcowpolicy-gw6 --file /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/node/docker-compose.yml up -d --no-recreate] (cluster.py:121, run_and_check) 2025-11-13 16:15:16.618000 [ 694 ] DEBUG : Stderr: Network roottestcowpolicy-gw6_default Creating (cluster.py:147, run_and_check) 2025-11-13 16:15:16.619000 [ 694 ] DEBUG : Stderr: Network roottestcowpolicy-gw6_default Created (cluster.py:147, run_and_check) 2025-11-13 16:15:16.619000 [ 694 ] DEBUG : Stderr: Container roottestcowpolicy-gw6-node-1 Creating (cluster.py:147, run_and_check) 2025-11-13 16:15:16.619000 [ 694 ] DEBUG : Stderr: Container roottestcowpolicy-gw6-node-1 Created (cluster.py:147, run_and_check) 2025-11-13 16:15:16.619000 [ 694 ] DEBUG : Stderr: Container roottestcowpolicy-gw6-node-1 Starting (cluster.py:147, run_and_check) 2025-11-13 16:15:16.619000 [ 694 ] DEBUG : Stderr: Container roottestcowpolicy-gw6-node-1 Started (cluster.py:147, run_and_check) 2025-11-13 16:15:16.619000 [ 694 ] DEBUG : ClickHouse instance created (cluster.py:3147, start) 2025-11-13 16:15:16.619000 [ 694 ] DEBUG : get_instance_ip instance_name=node (cluster.py:2005, get_instance_ip) 2025-11-13 16:15:16.631000 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestcowpolicy-gw6-node-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:16.633000 [ 694 ] DEBUG : get_instance_ip instance_name=node (cluster.py:2015, get_instance_global_ipv6) 2025-11-13 16:15:16.635000 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestcowpolicy-gw6-node-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:16.635000 [ 694 ] DEBUG : Waiting for ClickHouse start in node, ip: 172.16.2.2... 
(cluster.py:3155, start) 2025-11-13 16:15:16.639000 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestcowpolicy-gw6-node-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:16.643000 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/ffe807ca902fe83ad624818d9807258f98353e1c9360db6c03e5115a2d3ed4b8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:16.754000 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/ffe807ca902fe83ad624818d9807258f98353e1c9360db6c03e5115a2d3ed4b8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:16.860000 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/ffe807ca902fe83ad624818d9807258f98353e1c9360db6c03e5115a2d3ed4b8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:16.963000 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/ffe807ca902fe83ad624818d9807258f98353e1c9360db6c03e5115a2d3ed4b8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:17.066000 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/ffe807ca902fe83ad624818d9807258f98353e1c9360db6c03e5115a2d3ed4b8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:17.171000 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/ffe807ca902fe83ad624818d9807258f98353e1c9360db6c03e5115a2d3ed4b8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 16:15:17.172000 [ 694 ] DEBUG : ClickHouse node started (cluster.py:3159, start) ----------------------------- Captured stderr call ----------------------------- Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_disk' on node Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_disk' on node Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER 
BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_disk' on node Executing query DROP TABLE IF EXISTS uk_price_paid SYNC on node ------------------------------ Captured log call ------------------------------- 2025-11-13 16:15:17.175000 [ 694 ] DEBUG : Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_disk' on node (cluster.py:3648, query) 2025-11-13 16:16:11.059000 [ 694 ] DEBUG : Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_disk' on node (cluster.py:3648, query) 2025-11-13 16:17:05.689000 [ 694 ] DEBUG : Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_disk' on node (cluster.py:3648, query) 2025-11-13 16:18:00.910000 [ 694 ] DEBUG : Executing query DROP TABLE IF EXISTS uk_price_paid SYNC on node (cluster.py:3648, query) _____________________________ test_both_mergetree ______________________________ [gw5] linux -- Python 3.10.12 /usr/bin/python3 start_cluster = def test_both_mergetree(start_cluster): cleanup([replica1, replica2]) > create_source_table(replica1, "source", False) test_attach_partition_using_copy/test.py:106: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_attach_partition_using_copy/test.py:40: in create_source_table node.query_with_retry( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = sql = "\n ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'\n (\n price UInt32,\n ...disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/')\n " stdin = None, timeout = 60, settings = None, user = None, password = None database = None, host = None, ignore_error = False, retry_count = 3 sleep_time = 0.5 check_callback = at 0x7f1c5a621630> parse = False def 
query_with_retry( self, sql, stdin=None, timeout=None, settings=None, user=None, password=None, database=None, host=None, ignore_error=False, retry_count=20, sleep_time=0.5, check_callback=lambda x: True, parse=False, ): # logging.debug(f"Executing query {sql} on {self.name}") result = None exception_msg = "" for i in range(retry_count): try: result = self.query( sql, stdin=stdin, timeout=timeout, settings=settings, user=user, password=password, database=database, host=host, ignore_error=ignore_error, parse=parse, ) if check_callback(result): return result time.sleep(sleep_time) except QueryRuntimeException as ex: exception_msg = f"{type(ex).__name__}: {str(ex)}" # Container is down, this is likely due to server crash. if "No route to host" in str(ex): raise time.sleep(sleep_time) except Exception as ex: # logging.debug("Retry {} got exception {}".format(i + 1, ex)) exception_msg = f"{type(ex).__name__}: {str(ex)}" time.sleep(sleep_time) if result is not None: return result > raise Exception(f"Can't execute query {sql}\n{exception_msg}") E Exception: Can't execute query E ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree() E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') E E QueryRuntimeException: Client failed! Return code: 198, stderr: Received exception from server (version 25.3.8): E Code: 198. DB::Exception: Received from 172.16.1.5:9000. DB::NetException. DB::NetException: Not found address of host: raw.githubusercontent.com: while loading disk metadata. Stack trace: E E 0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000f52e9bb E 1. DB::NetException::NetException(int, FormatStringHelperImpl::type>, String const&) @ 0x000000000f503804 E 2. DB::(anonymous namespace)::hostByName(String const&) @ 0x000000000f505344 E 3. DB::DNSResolver::getResolvedIPAdressessWithFiltering(String const&) @ 0x000000000f5031f2 E 4. DB::DNSResolver::resolveIPAddressWithCache(String const&) @ 0x000000000f5043ba E 5. std::vector> std::__function::__policy_invoker> (String const&)>::__call_impl[abi:ne190107]> (String const&)>>(std::__function::__policy_storage const*, String const&) @ 0x000000000f8c00f6 E 6. DB::HostResolver::update() @ 0x000000000f8bb12c E 7. DB::HostResolver::HostResolver(String, Poco::Timespan) @ 0x000000000f8bafc9 E 8. std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler::make_shared_enabler(String const&) @ 0x000000000f8c066d E 9. std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler> std::allocate_shared[abi:ne190107] DB::HostResolver::create(String const&)::make_shared_enabler, std::allocator DB::HostResolver::create(String const&)::make_shared_enabler>, String const&, 0>(std::allocator DB::HostResolver::create(String const&)::make_shared_enabler> const&, String const&) @ 0x000000000f8c04d2 E 10. DB::HostResolversPool::getResolver(String const&) @ 0x000000000f8bcfd3 E 11. 
DB::EndpointConnectionPool::getConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x000000000f8b0fb5 E 12. DB::makeHTTPSession(DB::HTTPConnectionGroupType, Poco::URI const&, DB::ConnectionTimeouts const&, DB::ProxyConfiguration const&, unsigned long*) @ 0x000000000f8ca076 E 13. DB::ReadWriteBufferFromHTTP::callImpl(Poco::Net::HTTPResponse&, String const&, std::optional const&, bool) const @ 0x0000000011376e5c E 14. DB::ReadWriteBufferFromHTTP::callWithRedirects(Poco::Net::HTTPResponse&, String const&, std::optional const&) @ 0x000000001137729a E 15. void std::__function::__policy_invoker::__call_impl[abi:ne190107]>(std::__function::__policy_storage const*) @ 0x000000001137b4ac E 16. DB::ReadWriteBufferFromHTTP::doWithRetries(std::function&&, std::function, bool) const @ 0x00000000113745bb E 17. DB::ReadWriteBufferFromHTTP::nextImpl() @ 0x000000001137911f E 18. DB::WebObjectStorage::loadFiles(String const&, std::unique_lock const&) const @ 0x0000000012ca03c5 E 19. DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x0000000012ca3fc3 E 20. DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x0000000012ca3b87 E 21. DB::MetadataStorageFromStaticFilesWebServer::getStorageObjectsIfExist(String const&) const @ 0x0000000012c9dbbe E 22. DB::DiskObjectStorage::readFileIfExists(String const&, DB::ReadSettings const&, std::optional, std::optional) const @ 0x0000000012c35750 E 23. DB::MergeTreeData::initializeDirectoriesAndFormatVersion(String const&, bool, String const&, bool) @ 0x000000001465ad66 E 24. DB::StorageMergeTree::StorageMergeTree(DB::StorageID const&, String const&, DB::StorageInMemoryMetadata const&, DB::LoadingStrictnessLevel, std::shared_ptr, String const&, DB::MergeTreeData::MergingParams const&, std::unique_ptr>) @ 0x0000000014a270e8 E 25. DB::create(DB::StorageFactory::Arguments const&) @ 0x0000000014a2390c E 26. DB::StorageFactory::get(DB::ASTCreateQuery const&, String const&, std::shared_ptr, std::shared_ptr, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, DB::LoadingStrictnessLevel, bool) const @ 0x000000001402645b E 27. DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&, std::unique_ptr>&, DB::LoadingStrictnessLevel) @ 0x00000000135d99c0 E 28. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x00000000135ce682 E 29. DB::InterpreterCreateQuery::execute() @ 0x00000000135e1bd8 E 30. DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*, std::shared_ptr&) @ 0x0000000013a0cf6b E 31. DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000013a07984 E . 
(DNS_ERROR) E (query: ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree() E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') E ) helpers/cluster.py:3712: Exception ----------------------------- Captured stderr call ----------------------------- Executing query DROP TABLE IF EXISTS source SYNC on replica1 Executing query DROP TABLE IF EXISTS destination SYNC on replica1 Executing query DROP TABLE IF EXISTS source SYNC on replica2 Executing query DROP TABLE IF EXISTS destination SYNC on replica2 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 ------------------------------ Captured log call ------------------------------- 2025-11-13 16:17:50.311000 [ 661 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica1 (cluster.py:3648, query) 2025-11-13 16:17:50.377000 [ 661 ] 
DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica1 (cluster.py:3648, query) 2025-11-13 16:17:50.443000 [ 661 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica2 (cluster.py:3648, query) 2025-11-13 16:17:50.510000 [ 661 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica2 (cluster.py:3648, query) 2025-11-13 16:17:50.576000 [ 661 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query) 2025-11-13 16:18:45.491000 [ 661 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query) 2025-11-13 16:19:42.407000 [ 661 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query) ___________________ test_cow_policy[cow_policy_multi_volume] ___________________ [gw6] linux -- Python 3.10.12 /usr/bin/python3 start_cluster = storage_policy = 'cow_policy_multi_volume' @pytest.mark.parametrize("storage_policy", ["cow_policy_multi_disk", "cow_policy_multi_volume"]) def test_cow_policy(start_cluster, storage_policy): try: > node.query_with_retry( f""" ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town 
LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = '{storage_policy}' """, timeout=60, retry_count=3, ) test_cow_policy/test.py:24: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = sql = "\n ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'\n (\n ...BY (postcode1, postcode2, addr1, addr2)\n SETTINGS storage_policy = 'cow_policy_multi_volume'\n " stdin = None, timeout = 60, settings = None, user = None, password = None database = None, host = None, ignore_error = False, retry_count = 3 sleep_time = 0.5 check_callback = at 0x7fc0c028d630> parse = False def query_with_retry( self, sql, stdin=None, timeout=None, settings=None, user=None, password=None, database=None, host=None, ignore_error=False, retry_count=20, sleep_time=0.5, check_callback=lambda x: True, parse=False, ): # logging.debug(f"Executing query {sql} on {self.name}") result = None exception_msg = "" for i in range(retry_count): try: result = self.query( sql, stdin=stdin, timeout=timeout, settings=settings, user=user, password=password, database=database, host=host, ignore_error=ignore_error, parse=parse, ) if check_callback(result): return result time.sleep(sleep_time) except QueryRuntimeException as ex: exception_msg = f"{type(ex).__name__}: {str(ex)}" # Container is down, this is likely due to server crash. if "No route to host" in str(ex): raise time.sleep(sleep_time) except Exception as ex: # logging.debug("Retry {} got exception {}".format(i + 1, ex)) exception_msg = f"{type(ex).__name__}: {str(ex)}" time.sleep(sleep_time) if result is not None: return result > raise Exception(f"Can't execute query {sql}\n{exception_msg}") E Exception: Can't execute query E ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS storage_policy = 'cow_policy_multi_volume' E E QueryRuntimeException: Client failed! Return code: 198, stderr: Received exception from server (version 25.3.8): E Code: 198. DB::Exception: Received from 172.16.2.2:9000. DB::NetException. DB::NetException: Not found address of host: raw.githubusercontent.com: while loading disk metadata. Stack trace: E E 0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000f52e9bb E 1. DB::NetException::NetException(int, FormatStringHelperImpl::type>, String const&) @ 0x000000000f503804 E 2. DB::(anonymous namespace)::hostByName(String const&) @ 0x000000000f505344 E 3. DB::DNSResolver::getResolvedIPAdressessWithFiltering(String const&) @ 0x000000000f5031f2 E 4. DB::DNSResolver::resolveIPAddressWithCache(String const&) @ 0x000000000f5043ba E 5. std::vector> std::__function::__policy_invoker> (String const&)>::__call_impl[abi:ne190107]> (String const&)>>(std::__function::__policy_storage const*, String const&) @ 0x000000000f8c00f6 E 6. DB::HostResolver::update() @ 0x000000000f8bb12c E 7. 
DB::HostResolver::HostResolver(String, Poco::Timespan) @ 0x000000000f8bafc9 E 8. std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler::make_shared_enabler(String const&) @ 0x000000000f8c066d E 9. std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler> std::allocate_shared[abi:ne190107] DB::HostResolver::create(String const&)::make_shared_enabler, std::allocator DB::HostResolver::create(String const&)::make_shared_enabler>, String const&, 0>(std::allocator DB::HostResolver::create(String const&)::make_shared_enabler> const&, String const&) @ 0x000000000f8c04d2 E 10. DB::HostResolversPool::getResolver(String const&) @ 0x000000000f8bcfd3 E 11. DB::EndpointConnectionPool::getConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x000000000f8b0fb5 E 12. DB::makeHTTPSession(DB::HTTPConnectionGroupType, Poco::URI const&, DB::ConnectionTimeouts const&, DB::ProxyConfiguration const&, unsigned long*) @ 0x000000000f8ca076 E 13. DB::ReadWriteBufferFromHTTP::callImpl(Poco::Net::HTTPResponse&, String const&, std::optional const&, bool) const @ 0x0000000011376e5c E 14. DB::ReadWriteBufferFromHTTP::callWithRedirects(Poco::Net::HTTPResponse&, String const&, std::optional const&) @ 0x000000001137729a E 15. void std::__function::__policy_invoker::__call_impl[abi:ne190107]>(std::__function::__policy_storage const*) @ 0x000000001137b4ac E 16. DB::ReadWriteBufferFromHTTP::doWithRetries(std::function&&, std::function, bool) const @ 0x00000000113745bb E 17. DB::ReadWriteBufferFromHTTP::nextImpl() @ 0x000000001137911f E 18. DB::WebObjectStorage::loadFiles(String const&, std::unique_lock const&) const @ 0x0000000012ca03c5 E 19. DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x0000000012ca3fc3 E 20. DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x0000000012ca3b87 E 21. DB::MetadataStorageFromStaticFilesWebServer::getStorageObjectsIfExist(String const&) const @ 0x0000000012c9dbbe E 22. DB::DiskObjectStorage::readFileIfExists(String const&, DB::ReadSettings const&, std::optional, std::optional) const @ 0x0000000012c35750 E 23. DB::MergeTreeData::initializeDirectoriesAndFormatVersion(String const&, bool, String const&, bool) @ 0x000000001465ad66 E 24. DB::StorageMergeTree::StorageMergeTree(DB::StorageID const&, String const&, DB::StorageInMemoryMetadata const&, DB::LoadingStrictnessLevel, std::shared_ptr, String const&, DB::MergeTreeData::MergingParams const&, std::unique_ptr>) @ 0x0000000014a270e8 E 25. DB::create(DB::StorageFactory::Arguments const&) @ 0x0000000014a2390c E 26. DB::StorageFactory::get(DB::ASTCreateQuery const&, String const&, std::shared_ptr, std::shared_ptr, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, DB::LoadingStrictnessLevel, bool) const @ 0x000000001402645b E 27. DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&, std::unique_ptr>&, DB::LoadingStrictnessLevel) @ 0x00000000135d99c0 E 28. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x00000000135ce682 E 29. DB::InterpreterCreateQuery::execute() @ 0x00000000135e1bd8 E 30. DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*, std::shared_ptr&) @ 0x0000000013a0cf6b E 31. DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000013a07984 E . 
(DNS_ERROR) E (query: ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS storage_policy = 'cow_policy_multi_volume' E ) helpers/cluster.py:3712: Exception ----------------------------- Captured stderr call ----------------------------- Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_volume' on node Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_volume' on node Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_volume' on node Executing query DROP TABLE IF EXISTS uk_price_paid SYNC on node ------------------------------ Captured log call ------------------------------- 2025-11-13 16:18:01.264000 [ 694 ] DEBUG : Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY 
(postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_volume' on node (cluster.py:3648, query) 2025-11-13 16:18:58.181000 [ 694 ] DEBUG : Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_volume' on node (cluster.py:3648, query) 2025-11-13 16:19:55.800000 [ 694 ] DEBUG : Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_volume' on node (cluster.py:3648, query) 2025-11-13 16:20:50.418000 [ 694 ] DEBUG : Executing query DROP TABLE IF EXISTS uk_price_paid SYNC on node (cluster.py:3648, query) --------------------------- Captured stderr teardown --------------------------- Command:[docker compose --env-file /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/.env --project-name roottestcowpolicy-gw6 --file /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/node/docker-compose.yml stop --timeout 20] Stderr: Container roottestcowpolicy-gw6-node-1 Stopping Stderr: Container roottestcowpolicy-gw6-node-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/.env --project-name roottestcowpolicy-gw6 --file /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/node/docker-compose.yml down --volumes] Stderr: Container roottestcowpolicy-gw6-node-1 Stopping Stderr: Container roottestcowpolicy-gw6-node-1 Stopped Stderr: Container roottestcowpolicy-gw6-node-1 Removing Stderr: Container roottestcowpolicy-gw6-node-1 Removed Stderr: Network roottestcowpolicy-gw6_default Removing Stderr: Network roottestcowpolicy-gw6_default Removed Cleanup called Docker networks for project roottestcowpolicy-gw6 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestcowpolicy-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestcowpolicy-gw6 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestcowpolicy-gw6-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestcowpolicy-gw6 Trying to prune unused networks... 
Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:7 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 7 ---------------------------- Captured log teardown ----------------------------- 2025-11-13 16:20:50.601000 [ 694 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/.env --project-name roottestcowpolicy-gw6 --file /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/node/docker-compose.yml stop --timeout 20] (cluster.py:121, run_and_check) 2025-11-13 16:20:56.892000 [ 694 ] DEBUG : Stderr: Container roottestcowpolicy-gw6-node-1 Stopping (cluster.py:147, run_and_check) 2025-11-13 16:20:56.893000 [ 694 ] DEBUG : Stderr: Container roottestcowpolicy-gw6-node-1 Stopped (cluster.py:147, run_and_check) 2025-11-13 16:20:56.893000 [ 694 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:121, run_and_check) 2025-11-13 16:20:56.911000 [ 694 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/.env --project-name roottestcowpolicy-gw6 --file /ClickHouse/tests/integration/test_cow_policy/_instances-0-gw6/node/docker-compose.yml down --volumes] (cluster.py:121, run_and_check) 2025-11-13 16:20:57.474000 [ 694 ] DEBUG : Stderr: Container roottestcowpolicy-gw6-node-1 Stopping (cluster.py:147, run_and_check) 2025-11-13 16:20:57.475000 [ 694 ] DEBUG : Stderr: Container roottestcowpolicy-gw6-node-1 Stopped (cluster.py:147, run_and_check) 2025-11-13 16:20:57.475000 [ 694 ] DEBUG : Stderr: Container roottestcowpolicy-gw6-node-1 Removing (cluster.py:147, run_and_check) 2025-11-13 16:20:57.475000 [ 694 ] DEBUG : Stderr: Container roottestcowpolicy-gw6-node-1 Removed (cluster.py:147, run_and_check) 2025-11-13 16:20:57.475000 [ 694 ] DEBUG : Stderr: Network roottestcowpolicy-gw6_default Removing (cluster.py:147, run_and_check) 2025-11-13 16:20:57.475000 [ 694 ] DEBUG : Stderr: Network roottestcowpolicy-gw6_default Removed (cluster.py:147, run_and_check) 2025-11-13 16:20:57.476000 [ 694 ] DEBUG : Cleanup called (cluster.py:851, cleanup) 2025-11-13 16:20:57.506000 [ 694 ] DEBUG : Docker networks for project roottestcowpolicy-gw6 are NETWORK ID NAME DRIVER SCOPE (cluster.py:830, print_all_docker_pieces) 2025-11-13 16:20:57.531000 [ 694 ] DEBUG : Docker containers for project roottestcowpolicy-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:838, print_all_docker_pieces) 2025-11-13 16:20:57.561000 [ 694 ] DEBUG : Docker volumes for project roottestcowpolicy-gw6 are DRIVER VOLUME NAME (cluster.py:846, print_all_docker_pieces) 2025-11-13 16:20:57.562000 [ 694 ] DEBUG : Command:[docker container list --all --filter name='^/roottestcowpolicy-gw6-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:121, run_and_check) 2025-11-13 16:20:57.590000 [ 694 ] DEBUG : Unstopped containers: {} (cluster.py:865, cleanup) 2025-11-13 16:20:57.590000 [ 694 ] DEBUG : No running containers for project: roottestcowpolicy-gw6 (cluster.py:879, cleanup) 2025-11-13 16:20:57.590000 [ 694 ] DEBUG : Trying to prune unused networks... 
(cluster.py:885, cleanup) 2025-11-13 16:20:57.616000 [ 694 ] DEBUG : Trying to prune unused images... (cluster.py:901, cleanup) 2025-11-13 16:20:57.616000 [ 694 ] DEBUG : Command:[docker image prune -f] (cluster.py:121, run_and_check) 2025-11-13 16:20:57.659000 [ 694 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:145, run_and_check) 2025-11-13 16:20:57.659000 [ 694 ] DEBUG : Images pruned (cluster.py:904, cleanup) 2025-11-13 16:20:57.659000 [ 694 ] DEBUG : Trying to prune unused volumes... (cluster.py:910, cleanup) 2025-11-13 16:20:57.659000 [ 694 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:121, run_and_check) 2025-11-13 16:20:57.690000 [ 694 ] DEBUG : Stdout:7 (cluster.py:145, run_and_check) 2025-11-13 16:20:57.691000 [ 694 ] DEBUG : Command:[docker volume prune -f] (cluster.py:121, run_and_check) 2025-11-13 16:20:57.718000 [ 694 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:145, run_and_check) 2025-11-13 16:20:57.719000 [ 694 ] DEBUG : Volumes pruned: 7 (cluster.py:915, cleanup) _______________________ test_not_work_on_different_disk ________________________ [gw5] linux -- Python 3.10.12 /usr/bin/python3 start_cluster = def test_not_work_on_different_disk(start_cluster): cleanup([replica1, replica2]) # Replace and move should not work on replace > create_source_table(replica1, "source", False) test_attach_partition_using_copy/test.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_attach_partition_using_copy/test.py:40: in create_source_table node.query_with_retry( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = sql = "\n ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'\n (\n price UInt32,\n ...disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/')\n " stdin = None, timeout = 60, settings = None, user = None, password = None database = None, host = None, ignore_error = False, retry_count = 3 sleep_time = 0.5 check_callback = at 0x7f1c5a621630> parse = False def query_with_retry( self, sql, stdin=None, timeout=None, settings=None, user=None, password=None, database=None, host=None, ignore_error=False, retry_count=20, sleep_time=0.5, check_callback=lambda x: True, parse=False, ): # logging.debug(f"Executing query {sql} on {self.name}") result = None exception_msg = "" for i in range(retry_count): try: result = self.query( sql, stdin=stdin, timeout=timeout, settings=settings, user=user, password=password, database=database, host=host, ignore_error=ignore_error, parse=parse, ) if check_callback(result): return result time.sleep(sleep_time) except QueryRuntimeException as ex: exception_msg = f"{type(ex).__name__}: {str(ex)}" # Container is down, this is likely due to server crash. 
if "No route to host" in str(ex): raise time.sleep(sleep_time) except Exception as ex: # logging.debug("Retry {} got exception {}".format(i + 1, ex)) exception_msg = f"{type(ex).__name__}: {str(ex)}" time.sleep(sleep_time) if result is not None: return result > raise Exception(f"Can't execute query {sql}\n{exception_msg}") E Exception: Can't execute query E ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree() E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') E E QueryRuntimeException: Client failed! Return code: 198, stderr: Received exception from server (version 25.3.8): E Code: 198. DB::Exception: Received from 172.16.1.5:9000. DB::NetException. DB::NetException: Not found address of host: raw.githubusercontent.com: while loading disk metadata. Stack trace: E E 0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000f52e9bb E 1. DB::NetException::NetException(int, FormatStringHelperImpl::type>, String const&) @ 0x000000000f503804 E 2. DB::(anonymous namespace)::hostByName(String const&) @ 0x000000000f505344 E 3. DB::DNSResolver::getResolvedIPAdressessWithFiltering(String const&) @ 0x000000000f5031f2 E 4. DB::DNSResolver::resolveIPAddressWithCache(String const&) @ 0x000000000f5043ba E 5. std::vector> std::__function::__policy_invoker> (String const&)>::__call_impl[abi:ne190107]> (String const&)>>(std::__function::__policy_storage const*, String const&) @ 0x000000000f8c00f6 E 6. DB::HostResolver::update() @ 0x000000000f8bb12c E 7. DB::HostResolver::HostResolver(String, Poco::Timespan) @ 0x000000000f8bafc9 E 8. std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler::make_shared_enabler(String const&) @ 0x000000000f8c066d E 9. std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler> std::allocate_shared[abi:ne190107] DB::HostResolver::create(String const&)::make_shared_enabler, std::allocator DB::HostResolver::create(String const&)::make_shared_enabler>, String const&, 0>(std::allocator DB::HostResolver::create(String const&)::make_shared_enabler> const&, String const&) @ 0x000000000f8c04d2 E 10. DB::HostResolversPool::getResolver(String const&) @ 0x000000000f8bcfd3 E 11. DB::EndpointConnectionPool::getConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x000000000f8b0fb5 E 12. DB::makeHTTPSession(DB::HTTPConnectionGroupType, Poco::URI const&, DB::ConnectionTimeouts const&, DB::ProxyConfiguration const&, unsigned long*) @ 0x000000000f8ca076 E 13. DB::ReadWriteBufferFromHTTP::callImpl(Poco::Net::HTTPResponse&, String const&, std::optional const&, bool) const @ 0x0000000011376e5c E 14. DB::ReadWriteBufferFromHTTP::callWithRedirects(Poco::Net::HTTPResponse&, String const&, std::optional const&) @ 0x000000001137729a E 15. void std::__function::__policy_invoker::__call_impl[abi:ne190107]>(std::__function::__policy_storage const*) @ 0x000000001137b4ac E 16. 
DB::ReadWriteBufferFromHTTP::doWithRetries(std::function&&, std::function, bool) const @ 0x00000000113745bb E 17. DB::ReadWriteBufferFromHTTP::nextImpl() @ 0x000000001137911f E 18. DB::WebObjectStorage::loadFiles(String const&, std::unique_lock const&) const @ 0x0000000012ca03c5 E 19. DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x0000000012ca3fc3 E 20. DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x0000000012ca3b87 E 21. DB::MetadataStorageFromStaticFilesWebServer::getStorageObjectsIfExist(String const&) const @ 0x0000000012c9dbbe E 22. DB::DiskObjectStorage::readFileIfExists(String const&, DB::ReadSettings const&, std::optional, std::optional) const @ 0x0000000012c35750 E 23. DB::MergeTreeData::initializeDirectoriesAndFormatVersion(String const&, bool, String const&, bool) @ 0x000000001465ad66 E 24. DB::StorageMergeTree::StorageMergeTree(DB::StorageID const&, String const&, DB::StorageInMemoryMetadata const&, DB::LoadingStrictnessLevel, std::shared_ptr, String const&, DB::MergeTreeData::MergingParams const&, std::unique_ptr>) @ 0x0000000014a270e8 E 25. DB::create(DB::StorageFactory::Arguments const&) @ 0x0000000014a2390c E 26. DB::StorageFactory::get(DB::ASTCreateQuery const&, String const&, std::shared_ptr, std::shared_ptr, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, DB::LoadingStrictnessLevel, bool) const @ 0x000000001402645b E 27. DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&, std::unique_ptr>&, DB::LoadingStrictnessLevel) @ 0x00000000135d99c0 E 28. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x00000000135ce682 E 29. DB::InterpreterCreateQuery::execute() @ 0x00000000135e1bd8 E 30. DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*, std::shared_ptr&) @ 0x0000000013a0cf6b E 31. DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000013a07984 E . 
(DNS_ERROR) E (query: ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree() E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') E ) helpers/cluster.py:3712: Exception ----------------------------- Captured stderr call ----------------------------- Executing query DROP TABLE IF EXISTS source SYNC on replica1 Executing query DROP TABLE IF EXISTS destination SYNC on replica1 Executing query DROP TABLE IF EXISTS source SYNC on replica2 Executing query DROP TABLE IF EXISTS destination SYNC on replica2 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 ------------------------------ Captured log call ------------------------------- 2025-11-13 16:20:40.115000 [ 661 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica1 (cluster.py:3648, query) 2025-11-13 16:20:40.180000 [ 661 ] 
DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica1 (cluster.py:3648, query) 2025-11-13 16:20:40.296000 [ 661 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica2 (cluster.py:3648, query) 2025-11-13 16:20:40.413000 [ 661 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica2 (cluster.py:3648, query) 2025-11-13 16:20:40.529000 [ 661 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query) 2025-11-13 16:21:34.643000 [ 661 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query) 2025-11-13 16:22:29.866000 [ 661 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query) _______________________ test_only_destination_replicated _______________________ [gw5] linux -- Python 3.10.12 /usr/bin/python3 start_cluster = def test_only_destination_replicated(start_cluster): cleanup([replica1, replica2]) > create_source_table(replica1, "source", False) test_attach_partition_using_copy/test.py:163: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_attach_partition_using_copy/test.py:40: in create_source_table node.query_with_retry( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = sql = "\n ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'\n (\n price UInt32,\n ...disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/')\n " stdin = None, 
timeout = 60, settings = None, user = None, password = None database = None, host = None, ignore_error = False, retry_count = 3 sleep_time = 0.5 check_callback = at 0x7f1c5a621630> parse = False def query_with_retry( self, sql, stdin=None, timeout=None, settings=None, user=None, password=None, database=None, host=None, ignore_error=False, retry_count=20, sleep_time=0.5, check_callback=lambda x: True, parse=False, ): # logging.debug(f"Executing query {sql} on {self.name}") result = None exception_msg = "" for i in range(retry_count): try: result = self.query( sql, stdin=stdin, timeout=timeout, settings=settings, user=user, password=password, database=database, host=host, ignore_error=ignore_error, parse=parse, ) if check_callback(result): return result time.sleep(sleep_time) except QueryRuntimeException as ex: exception_msg = f"{type(ex).__name__}: {str(ex)}" # Container is down, this is likely due to server crash. if "No route to host" in str(ex): raise time.sleep(sleep_time) except Exception as ex: # logging.debug("Retry {} got exception {}".format(i + 1, ex)) exception_msg = f"{type(ex).__name__}: {str(ex)}" time.sleep(sleep_time) if result is not None: return result > raise Exception(f"Can't execute query {sql}\n{exception_msg}") E Exception: Can't execute query E ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree() E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') E E QueryRuntimeException: Client failed! Return code: 198, stderr: Received exception from server (version 25.3.8): E Code: 198. DB::Exception: Received from 172.16.1.5:9000. DB::NetException. DB::NetException: Not found address of host: raw.githubusercontent.com: while loading disk metadata. Stack trace: E E 0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000f52e9bb E 1. DB::NetException::NetException(int, FormatStringHelperImpl::type>, String const&) @ 0x000000000f503804 E 2. DB::(anonymous namespace)::hostByName(String const&) @ 0x000000000f505344 E 3. DB::DNSResolver::getResolvedIPAdressessWithFiltering(String const&) @ 0x000000000f5031f2 E 4. DB::DNSResolver::resolveIPAddressWithCache(String const&) @ 0x000000000f5043ba E 5. std::vector> std::__function::__policy_invoker> (String const&)>::__call_impl[abi:ne190107]> (String const&)>>(std::__function::__policy_storage const*, String const&) @ 0x000000000f8c00f6 E 6. DB::HostResolver::update() @ 0x000000000f8bb12c E 7. DB::HostResolver::HostResolver(String, Poco::Timespan) @ 0x000000000f8bafc9 E 8. std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler::make_shared_enabler(String const&) @ 0x000000000f8c066d E 9. 
std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler> std::allocate_shared[abi:ne190107] DB::HostResolver::create(String const&)::make_shared_enabler, std::allocator DB::HostResolver::create(String const&)::make_shared_enabler>, String const&, 0>(std::allocator DB::HostResolver::create(String const&)::make_shared_enabler> const&, String const&) @ 0x000000000f8c04d2 E 10. DB::HostResolversPool::getResolver(String const&) @ 0x000000000f8bcfd3 E 11. DB::EndpointConnectionPool::getConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x000000000f8b0fb5 E 12. DB::makeHTTPSession(DB::HTTPConnectionGroupType, Poco::URI const&, DB::ConnectionTimeouts const&, DB::ProxyConfiguration const&, unsigned long*) @ 0x000000000f8ca076 E 13. DB::ReadWriteBufferFromHTTP::callImpl(Poco::Net::HTTPResponse&, String const&, std::optional const&, bool) const @ 0x0000000011376e5c E 14. DB::ReadWriteBufferFromHTTP::callWithRedirects(Poco::Net::HTTPResponse&, String const&, std::optional const&) @ 0x000000001137729a E 15. void std::__function::__policy_invoker::__call_impl[abi:ne190107]>(std::__function::__policy_storage const*) @ 0x000000001137b4ac E 16. DB::ReadWriteBufferFromHTTP::doWithRetries(std::function&&, std::function, bool) const @ 0x00000000113745bb E 17. DB::ReadWriteBufferFromHTTP::nextImpl() @ 0x000000001137911f E 18. DB::WebObjectStorage::loadFiles(String const&, std::unique_lock const&) const @ 0x0000000012ca03c5 E 19. DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x0000000012ca3fc3 E 20. DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x0000000012ca3b87 E 21. DB::MetadataStorageFromStaticFilesWebServer::getStorageObjectsIfExist(String const&) const @ 0x0000000012c9dbbe E 22. DB::DiskObjectStorage::readFileIfExists(String const&, DB::ReadSettings const&, std::optional, std::optional) const @ 0x0000000012c35750 E 23. DB::MergeTreeData::initializeDirectoriesAndFormatVersion(String const&, bool, String const&, bool) @ 0x000000001465ad66 E 24. DB::StorageMergeTree::StorageMergeTree(DB::StorageID const&, String const&, DB::StorageInMemoryMetadata const&, DB::LoadingStrictnessLevel, std::shared_ptr, String const&, DB::MergeTreeData::MergingParams const&, std::unique_ptr>) @ 0x0000000014a270e8 E 25. DB::create(DB::StorageFactory::Arguments const&) @ 0x0000000014a2390c E 26. DB::StorageFactory::get(DB::ASTCreateQuery const&, String const&, std::shared_ptr, std::shared_ptr, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, DB::LoadingStrictnessLevel, bool) const @ 0x000000001402645b E 27. DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&, std::unique_ptr>&, DB::LoadingStrictnessLevel) @ 0x00000000135d99c0 E 28. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x00000000135ce682 E 29. DB::InterpreterCreateQuery::execute() @ 0x00000000135e1bd8 E 30. DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*, std::shared_ptr&) @ 0x0000000013a0cf6b E 31. DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000013a07984 E . 
(DNS_ERROR) E (query: ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree() E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') E ) helpers/cluster.py:3712: Exception ----------------------------- Captured stderr call ----------------------------- Executing query DROP TABLE IF EXISTS source SYNC on replica1 Executing query DROP TABLE IF EXISTS destination SYNC on replica1 Executing query DROP TABLE IF EXISTS source SYNC on replica2 Executing query DROP TABLE IF EXISTS destination SYNC on replica2 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 ------------------------------ Captured log call ------------------------------- 2025-11-13 16:23:26.962000 [ 661 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica1 (cluster.py:3648, query) 2025-11-13 16:23:27.077000 [ 661 ] 
DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica1 (cluster.py:3648, query) 2025-11-13 16:23:27.194000 [ 661 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica2 (cluster.py:3648, query) 2025-11-13 16:23:27.310000 [ 661 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica2 (cluster.py:3648, query) 2025-11-13 16:23:27.426000 [ 661 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query) 2025-11-13 16:24:22.907000 [ 661 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query) 2025-11-13 16:25:17.533000 [ 661 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query) --------------------------- Captured stderr teardown --------------------------- Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/.env --project-name roottestattachpartitionusingcopy-gw5 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica2/docker-compose.yml stop --timeout 20] Stderr: Container roottestattachpartitionusingcopy-gw5-replica2-1 Stopping Stderr: Container roottestattachpartitionusingcopy-gw5-replica1-1 Stopping Stderr: Container 
roottestattachpartitionusingcopy-gw5-replica2-1 Stopped Stderr: Container roottestattachpartitionusingcopy-gw5-replica1-1 Stopped Stderr: Container roottestattachpartitionusingcopy-gw5-zoo3-1 Stopping Stderr: Container roottestattachpartitionusingcopy-gw5-zoo1-1 Stopping Stderr: Container roottestattachpartitionusingcopy-gw5-zoo2-1 Stopping Stderr: Container roottestattachpartitionusingcopy-gw5-zoo1-1 Stopped Stderr: Container roottestattachpartitionusingcopy-gw5-zoo3-1 Stopped Stderr: Container roottestattachpartitionusingcopy-gw5-zoo2-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[bash -c [ -f /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/.env --project-name roottestattachpartitionusingcopy-gw5 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica2/docker-compose.yml down --volumes] Stderr: Container roottestattachpartitionusingcopy-gw5-replica1-1 Stopping Stderr: Container roottestattachpartitionusingcopy-gw5-replica2-1 Stopping Stderr: Container roottestattachpartitionusingcopy-gw5-replica1-1 Stopped Stderr: Container roottestattachpartitionusingcopy-gw5-replica1-1 Removing Stderr: Container roottestattachpartitionusingcopy-gw5-replica2-1 Stopped Stderr: Container roottestattachpartitionusingcopy-gw5-replica2-1 Removing Stderr: Container roottestattachpartitionusingcopy-gw5-replica2-1 Removed Stderr: Container roottestattachpartitionusingcopy-gw5-replica1-1 Removed Stderr: Container roottestattachpartitionusingcopy-gw5-zoo1-1 Stopping Stderr: Container roottestattachpartitionusingcopy-gw5-zoo3-1 Stopping Stderr: Container roottestattachpartitionusingcopy-gw5-zoo2-1 Stopping Stderr: Container roottestattachpartitionusingcopy-gw5-zoo1-1 Stopped Stderr: Container roottestattachpartitionusingcopy-gw5-zoo1-1 Removing Stderr: Container roottestattachpartitionusingcopy-gw5-zoo3-1 Stopped Stderr: Container roottestattachpartitionusingcopy-gw5-zoo3-1 Removing Stderr: Container roottestattachpartitionusingcopy-gw5-zoo2-1 Stopped Stderr: Container roottestattachpartitionusingcopy-gw5-zoo2-1 Removing Stderr: Container roottestattachpartitionusingcopy-gw5-zoo3-1 Removed Stderr: Container roottestattachpartitionusingcopy-gw5-zoo1-1 Removed Stderr: Container roottestattachpartitionusingcopy-gw5-zoo2-1 Removed Stderr: Network roottestattachpartitionusingcopy-gw5_default Removing Stderr: Network roottestattachpartitionusingcopy-gw5_default Removed Cleanup called Docker networks for project roottestattachpartitionusingcopy-gw5 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestattachpartitionusingcopy-gw5 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project 
roottestattachpartitionusingcopy-gw5 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestattachpartitionusingcopy-gw5-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestattachpartitionusingcopy-gw5 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:1 Volumes pruned: 1 ---------------------------- Captured log teardown ----------------------------- 2025-11-13 16:26:12.818000 [ 661 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/.env --project-name roottestattachpartitionusingcopy-gw5 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica2/docker-compose.yml stop --timeout 20] (cluster.py:121, run_and_check) 2025-11-13 16:26:19.872000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-replica2-1 Stopping (cluster.py:147, run_and_check) 2025-11-13 16:26:19.873000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-replica1-1 Stopping (cluster.py:147, run_and_check) 2025-11-13 16:26:19.873000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-replica2-1 Stopped (cluster.py:147, run_and_check) 2025-11-13 16:26:19.873000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-replica1-1 Stopped (cluster.py:147, run_and_check) 2025-11-13 16:26:19.873000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo3-1 Stopping (cluster.py:147, run_and_check) 2025-11-13 16:26:19.873000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo1-1 Stopping (cluster.py:147, run_and_check) 2025-11-13 16:26:19.874000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo2-1 Stopping (cluster.py:147, run_and_check) 2025-11-13 16:26:19.874000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo1-1 Stopped (cluster.py:147, run_and_check) 2025-11-13 16:26:19.874000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo3-1 Stopped (cluster.py:147, run_and_check) 2025-11-13 16:26:19.874000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo2-1 Stopped (cluster.py:147, run_and_check) 2025-11-13 16:26:19.874000 [ 661 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:121, run_and_check) 2025-11-13 16:26:19.891000 [ 661 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:121, run_and_check) 2025-11-13 16:26:19.907000 [ 661 ] DEBUG : Command:[docker compose --env-file 
/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/.env --project-name roottestattachpartitionusingcopy-gw5 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw5/replica2/docker-compose.yml down --volumes] (cluster.py:121, run_and_check) 2025-11-13 16:26:20.493000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-replica1-1 Stopping (cluster.py:147, run_and_check) 2025-11-13 16:26:20.494000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-replica2-1 Stopping (cluster.py:147, run_and_check) 2025-11-13 16:26:20.494000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-replica1-1 Stopped (cluster.py:147, run_and_check) 2025-11-13 16:26:20.494000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-replica1-1 Removing (cluster.py:147, run_and_check) 2025-11-13 16:26:20.494000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-replica2-1 Stopped (cluster.py:147, run_and_check) 2025-11-13 16:26:20.494000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-replica2-1 Removing (cluster.py:147, run_and_check) 2025-11-13 16:26:20.494000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-replica2-1 Removed (cluster.py:147, run_and_check) 2025-11-13 16:26:20.494000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-replica1-1 Removed (cluster.py:147, run_and_check) 2025-11-13 16:26:20.495000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo1-1 Stopping (cluster.py:147, run_and_check) 2025-11-13 16:26:20.495000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo3-1 Stopping (cluster.py:147, run_and_check) 2025-11-13 16:26:20.495000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo2-1 Stopping (cluster.py:147, run_and_check) 2025-11-13 16:26:20.495000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo1-1 Stopped (cluster.py:147, run_and_check) 2025-11-13 16:26:20.495000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo1-1 Removing (cluster.py:147, run_and_check) 2025-11-13 16:26:20.495000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo3-1 Stopped (cluster.py:147, run_and_check) 2025-11-13 16:26:20.495000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo3-1 Removing (cluster.py:147, run_and_check) 2025-11-13 16:26:20.496000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo2-1 Stopped (cluster.py:147, run_and_check) 2025-11-13 16:26:20.496000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo2-1 Removing (cluster.py:147, run_and_check) 2025-11-13 16:26:20.496000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo3-1 Removed (cluster.py:147, run_and_check) 2025-11-13 16:26:20.496000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo1-1 Removed (cluster.py:147, run_and_check) 2025-11-13 16:26:20.496000 [ 661 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw5-zoo2-1 Removed (cluster.py:147, run_and_check) 2025-11-13 16:26:20.496000 [ 661 ] DEBUG : Stderr: Network 
roottestattachpartitionusingcopy-gw5_default Removing (cluster.py:147, run_and_check) 2025-11-13 16:26:20.496000 [ 661 ] DEBUG : Stderr: Network roottestattachpartitionusingcopy-gw5_default Removed (cluster.py:147, run_and_check) 2025-11-13 16:26:20.497000 [ 661 ] DEBUG : Cleanup called (cluster.py:851, cleanup) 2025-11-13 16:26:20.530000 [ 661 ] DEBUG : Docker networks for project roottestattachpartitionusingcopy-gw5 are NETWORK ID NAME DRIVER SCOPE (cluster.py:830, print_all_docker_pieces) 2025-11-13 16:26:20.562000 [ 661 ] DEBUG : Docker containers for project roottestattachpartitionusingcopy-gw5 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:838, print_all_docker_pieces) 2025-11-13 16:26:20.594000 [ 661 ] DEBUG : Docker volumes for project roottestattachpartitionusingcopy-gw5 are DRIVER VOLUME NAME (cluster.py:846, print_all_docker_pieces) 2025-11-13 16:26:20.594000 [ 661 ] DEBUG : Command:[docker container list --all --filter name='^/roottestattachpartitionusingcopy-gw5-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:121, run_and_check) 2025-11-13 16:26:20.626000 [ 661 ] DEBUG : Unstopped containers: {} (cluster.py:865, cleanup) 2025-11-13 16:26:20.626000 [ 661 ] DEBUG : No running containers for project: roottestattachpartitionusingcopy-gw5 (cluster.py:879, cleanup) 2025-11-13 16:26:20.627000 [ 661 ] DEBUG : Trying to prune unused networks... (cluster.py:885, cleanup) 2025-11-13 16:26:20.659000 [ 661 ] DEBUG : Trying to prune unused images... (cluster.py:901, cleanup) 2025-11-13 16:26:20.659000 [ 661 ] DEBUG : Command:[docker image prune -f] (cluster.py:121, run_and_check) 2025-11-13 16:26:20.705000 [ 661 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:145, run_and_check) 2025-11-13 16:26:20.705000 [ 661 ] DEBUG : Images pruned (cluster.py:904, cleanup) 2025-11-13 16:26:20.706000 [ 661 ] DEBUG : Trying to prune unused volumes... 
(cluster.py:910, cleanup) 2025-11-13 16:26:20.706000 [ 661 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:121, run_and_check) 2025-11-13 16:26:20.739000 [ 661 ] DEBUG : Stdout:1 (cluster.py:145, run_and_check) 2025-11-13 16:26:20.740000 [ 661 ] DEBUG : Volumes pruned: 1 (cluster.py:915, cleanup) ----------------- generated report log file: parallel0_0.jsonl ----------------- ============================== slowest durations =============================== 186.23s setup test_backup_restore_azure_blob_storage/test.py::test_backup_restore 169.72s call test_attach_partition_using_copy/test.py::test_both_mergetree 169.22s call test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume] 166.68s call test_attach_partition_using_copy/test.py::test_not_work_on_different_disk 165.76s call test_attach_partition_using_copy/test.py::test_only_destination_replicated 164.17s call test_attach_partition_using_copy/test.py::test_all_replicated 163.85s call test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk] 75.72s call test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_two_replicas 73.93s call test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_in_empty_pool_replicated 65.14s call test_ddl_worker_replicas/test.py::test_ddl_worker_replicas 45.75s call test_cluster_discovery/test_auxiliary_keeper.py::test_cluster_discovery_with_auxiliary_keeper_startup_and_stop 32.39s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_cache-False-False] 30.77s teardown test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf2 30.42s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[cache-True-True] 30.31s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[cache-False-True] 29.31s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_cache-True-False] 25.69s call test_distributed_directory_monitor_split_batch_on_failure/test.py::test_distributed_background_insert_split_batch_on_failure_OFF 25.31s call test_distributed_directory_monitor_split_batch_on_failure/test.py::test_distributed_background_insert_split_batch_on_failure_ON 23.19s teardown test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-s3_no_native_copy] 21.75s teardown test_ddl_alter_query/test.py::test_ddl_queue_hostname_change 20.83s call test_executable_user_defined_function/test.py::test_executable_function_slow_python 20.62s setup test_cluster_discovery/test_auxiliary_keeper.py::test_cluster_discovery_with_auxiliary_keeper_startup_and_stop 19.19s setup test_check_table_name_length/test.py::test_backward_compatibility 18.98s setup test_ddl_alter_query/test.py::test_alter 18.97s setup test_distributed_ddl_password/test.py::test_alter 18.63s setup test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_in_empty_pool 18.62s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[direct-False-True] 18.61s setup test_attach_partition_using_copy/test.py::test_all_replicated 18.24s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[hashed-True-False] 18.23s call test_config_xml_main/test.py::test_xml_main_conf 17.83s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[direct-True-False] 17.82s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[direct-True-True] 17.63s call 
test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[cache-True-True] 17.57s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[flat-False-True] 17.53s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[direct-False-False] 17.52s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[cache-True-False] 17.48s setup test_executable_user_defined_function/test.py::test_executable_function_always_error_python 17.28s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[flat-True-False] 17.17s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[hashed-True-True] 17.16s setup test_compatibility_merge_tree_settings/test.py::test_check_projections_compatibility 16.98s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[flat-True-True] 16.85s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[hashed-False-True] 16.68s setup test_cluster_discovery/test_password.py::test_connect_with_password 16.21s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_direct-False-False] 16.16s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_hashed-False-False] 16.13s call test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_simple[hashed] 16.04s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[cache-False-False] 16.03s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_direct-True-False] 15.97s setup test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-default] 15.57s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_hashed-True-False] 15.37s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_cache-True-False] 15.00s setup test_backward_compatibility/test_short_strings_aggregation.py::test_backward_compatability 14.69s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_cache-False-False] 14.43s call test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_complex[complex_key_hashed] 14.14s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[flat-False-False] 13.52s setup test_compression_nested_columns/test.py::test_nested_compression_codec 13.52s setup test_default_role/test.py::test_alter_user 13.42s setup test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field 13.40s setup test_buffer_profile/test.py::test_buffer_profile 13.40s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[cache-False-True] 13.32s setup test_backward_compatibility/test_memory_bound_aggregation.py::test_backward_compatability 13.26s setup test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_complex[complex_key_hashed] 12.96s setup test_ddl_worker_replicas/test.py::test_ddl_worker_replicas 12.69s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[hashed-False-False] 12.63s setup test_distributed_directory_monitor_split_batch_on_failure/test.py::test_distributed_background_insert_split_batch_on_failure_OFF 12.59s setup test_concurrent_queries_for_user_restriction/test.py::test_exception_message 12.45s setup 
test_accept_invalid_certificate/test.py::test_accept 11.57s setup test_custom_settings/test.py::test_custom_settings 11.24s call test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_in_empty_pool 10.98s setup test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk] 10.25s setup test_file_cluster/test.py::test_count 9.26s call test_concurrent_ttl_merges/test.py::test_no_ttl_merges_in_busy_pool 9.16s setup test_backward_compatibility/test_ip_types_binary_compatibility.py::test_ip_types_binary_compatibility 9.07s call test_config_yaml_merge_keys/test.py::test_yaml_merge_keys_conf 8.27s teardown test_file_cluster/test.py::test_format_detection 7.99s setup test_cleanup_after_start/test.py::test_old_dirs_cleanup 7.92s teardown test_attach_partition_using_copy/test.py::test_only_destination_replicated 7.45s teardown test_distributed_ddl_password/test.py::test_truncate 7.12s teardown test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume] 6.62s teardown test_distributed_directory_monitor_split_batch_on_failure/test.py::test_distributed_background_insert_split_batch_on_failure_ON 6.54s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_ranged[range_hashed-True-False] 6.47s teardown test_check_table_name_length/test.py::test_check_table_name_length 6.35s call test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_ranged[range_hashed-False-False] 6.24s setup test_config_reloader_interval/test.py::test_reload_config 6.15s teardown test_compatibility_merge_tree_settings/test.py::test_config_overrides_compatibility 5.80s call test_cleanup_after_start/test.py::test_old_dirs_cleanup 5.64s teardown test_compression_nested_columns/test.py::test_nested_compression_codec 5.59s call test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_ranged[range_hashed] 5.11s call test_ddl_alter_query/test.py::test_ddl_queue_hostname_change 5.10s teardown test_accept_invalid_certificate/test.py::test_strict_reject_with_config 4.99s teardown test_ddl_worker_replicas/test.py::test_ddl_worker_replicas 4.62s call test_backward_compatibility/test_ip_types_binary_compatibility.py::test_ip_types_binary_compatibility 4.25s teardown test_config_reloader_interval/test.py::test_reload_config 4.10s teardown test_buffer_profile/test.py::test_default_profile 4.03s teardown test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field 4.00s teardown test_custom_settings/test.py::test_illformed_setting 3.85s teardown test_concurrent_queries_for_user_restriction/test.py::test_exception_message 3.79s teardown test_concurrent_ttl_merges/test.py::test_no_ttl_merges_in_busy_pool 3.71s call test_backup_restore_azure_blob_storage/test.py::test_backup_restore_correct_block_ids 3.70s teardown test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[hashed-True-True] 3.61s teardown test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_simple[hashed] 3.50s teardown test_cluster_discovery/test_auxiliary_keeper.py::test_cluster_discovery_with_auxiliary_keeper_startup_and_stop 3.43s teardown test_backward_compatibility/test_short_strings_aggregation.py::test_backward_compatability 3.16s call test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field 3.04s call test_distributed_ddl_password/test.py::test_alter 2.80s teardown test_default_role/test.py::test_wrong_set_default_role 2.77s teardown test_cleanup_after_start/test.py::test_old_dirs_cleanup 2.69s teardown 
test_cluster_discovery/test_password.py::test_connect_with_password 2.67s teardown test_backward_compatibility/test_memory_bound_aggregation.py::test_backward_compatability 2.20s call test_distributed_ddl_password/test.py::test_truncate 1.98s teardown test_backward_compatibility/test_ip_types_binary_compatibility.py::test_ip_types_binary_compatibility 1.89s call test_compression_nested_columns/test.py::test_nested_compression_codec 1.82s call test_executable_user_defined_function/test.py::test_executable_function_sum_json_python 1.69s call test_executable_user_defined_function/test.py::test_executable_function_always_error_python 1.63s call test_file_cluster/test.py::test_format_detection 1.54s call test_backward_compatibility/test_memory_bound_aggregation.py::test_backward_compatability 1.50s call test_compatibility_merge_tree_settings/test.py::test_check_projections_compatibility 1.47s teardown test_executable_user_defined_function/test.py::test_executable_function_sum_python 1.40s call test_concurrent_queries_for_user_restriction/test.py::test_exception_message 1.37s call test_ddl_alter_query/test.py::test_alter 1.31s call test_executable_user_defined_function/test.py::test_executable_function_send_chunk_header_python 1.21s call test_default_role/test.py::test_set_default_roles 1.12s call test_backward_compatibility/test_short_strings_aggregation.py::test_backward_compatability 1.11s call test_config_reloader_interval/test.py::test_reload_config 1.07s call test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-s3_native_copy] 1.05s call test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-default] 1.04s call test_executable_user_defined_function/test.py::test_executable_function_input_nullable_python 1.03s call test_executable_user_defined_function/test.py::test_executable_function_query_cache 1.01s call test_backup_restore_azure_blob_storage/test.py::test_backup_restore_on_merge_tree 0.93s call test_backup_restore_azure_blob_storage/test.py::test_backup_restore 0.92s call test_default_role/test.py::test_alter_user 0.77s call test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf2 0.76s call test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-default] 0.75s call test_cluster_discovery/test_password.py::test_connect_with_password 0.73s call test_backup_restore_azure_blob_storage/test.py::test_backup_restore_diff_container 0.67s call test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf1 0.66s call test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-s3_native_copy] 0.61s call test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-s3_no_native_copy] 0.61s call test_executable_user_defined_function/test.py::test_executable_function_parameter_python 0.58s call test_executable_user_defined_function/test.py::test_executable_function_non_direct_bash 0.58s call test_executable_user_defined_function/test.py::test_executable_function_python 0.58s call test_executable_user_defined_function/test.py::test_executable_function_argument_python 0.58s call test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-s3_no_native_copy] 0.53s call test_executable_user_defined_function/test.py::test_executable_function_sum_python 0.53s call test_executable_user_defined_function/test.py::test_executable_function_bash 0.53s call test_custom_settings/test.py::test_custom_settings 0.48s call 
test_executable_user_defined_function/test.py::test_executable_function_signalled_python 0.46s call test_check_table_name_length/test.py::test_check_table_name_length 0.42s call test_check_table_name_length/test.py::test_backward_compatibility 0.33s call test_default_role/test.py::test_wrong_set_default_role 0.29s call test_file_cluster/test.py::test_count 0.23s call test_buffer_profile/test.py::test_default_profile 0.18s call test_buffer_profile/test.py::test_buffer_profile 0.18s call test_custom_settings/test.py::test_illformed_setting 0.17s call test_accept_invalid_certificate/test.py::test_strict_reject 0.17s call test_accept_invalid_certificate/test.py::test_default 0.13s call test_compatibility_merge_tree_settings/test.py::test_config_overrides_compatibility 0.12s call test_accept_invalid_certificate/test.py::test_strict_connection_reject 0.12s call test_accept_invalid_certificate/test.py::test_accept 0.12s call test_accept_invalid_certificate/test.py::test_strict_reject_with_config 0.12s call test_accept_invalid_certificate/test.py::test_connection_accept 0.07s setup test_default_role/test.py::test_wrong_set_default_role 0.07s setup test_default_role/test.py::test_set_default_roles 0.00s teardown test_executable_user_defined_function/test.py::test_executable_function_always_error_python 0.00s setup test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_in_empty_pool_replicated 0.00s setup test_custom_settings/test.py::test_illformed_setting 0.00s setup test_config_xml_main/test.py::test_xml_main_conf 0.00s teardown test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_in_empty_pool 0.00s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[direct-True-True] 0.00s teardown test_config_xml_main/test.py::test_xml_main_conf 0.00s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_direct-False-False] 0.00s setup test_ddl_alter_query/test.py::test_ddl_queue_hostname_change 0.00s setup test_executable_user_defined_function/test.py::test_executable_function_parameter_python 0.00s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_direct-True-False] 0.00s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[flat-False-False] 0.00s teardown test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_two_replicas 0.00s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_hashed-False-False] 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_cache-True-False] 0.00s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[flat-True-True] 0.00s setup test_file_cluster/test.py::test_format_detection 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_cache-False-False] 0.00s teardown test_custom_settings/test.py::test_custom_settings 0.00s setup test_config_yaml_merge_keys/test.py::test_yaml_merge_keys_conf 0.00s teardown test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_in_empty_pool_replicated 0.00s teardown test_config_yaml_merge_keys/test.py::test_yaml_merge_keys_conf 0.00s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_ranged[range_hashed-True-False] 0.00s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[cache-True-False] 0.00s teardown 
test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[cache-True-True] 0.00s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[direct-True-False] 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_ranged[range_hashed] 0.00s setup test_backup_restore_azure_blob_storage/test.py::test_backup_restore_on_merge_tree 0.00s setup test_accept_invalid_certificate/test.py::test_strict_reject_with_config 0.00s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[hashed-True-True] 0.00s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[hashed-False-True] 0.00s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_hashed-True-False] 0.00s setup test_executable_user_defined_function/test.py::test_executable_function_argument_python 0.00s setup test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-s3_no_native_copy] 0.00s setup test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-s3_no_native_copy] 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[cache-True-False] 0.00s setup test_executable_user_defined_function/test.py::test_executable_function_sum_python 0.00s setup test_executable_user_defined_function/test.py::test_executable_function_input_nullable_python 0.00s setup test_executable_user_defined_function/test.py::test_executable_function_python 0.00s teardown test_distributed_ddl_password/test.py::test_alter 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_complex[complex_key_hashed] 0.00s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[hashed-True-False] 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_ranged[range_hashed-True-False] 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_direct-True-False] 0.00s setup test_executable_user_defined_function/test.py::test_executable_function_signalled_python 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_hashed-True-False] 0.00s setup test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-s3_native_copy] 0.00s teardown test_attach_partition_using_copy/test.py::test_all_replicated 0.00s teardown test_compatibility_merge_tree_settings/test.py::test_check_projections_compatibility 0.00s setup test_backup_restore_azure_blob_storage/test.py::test_backup_restore_diff_container 0.00s setup test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_ranged[range_hashed] 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[direct-True-True] 0.00s teardown test_backup_restore_azure_blob_storage/test.py::test_backup_restore 0.00s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_ranged[range_hashed-False-False] 0.00s teardown test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-default] 0.00s teardown test_file_cluster/test.py::test_count 0.00s setup test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf2 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[flat-True-False] 0.00s setup test_accept_invalid_certificate/test.py::test_default 0.00s teardown 
test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[flat-True-True] 0.00s setup test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_simple[hashed] 0.00s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[direct-False-False] 0.00s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[flat-False-True] 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[flat-False-True] 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[hashed-False-False] 0.00s setup test_accept_invalid_certificate/test.py::test_connection_accept 0.00s teardown test_distributed_directory_monitor_split_batch_on_failure/test.py::test_distributed_background_insert_split_batch_on_failure_OFF 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[hashed-False-True] 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[cache-False-True] 0.00s teardown test_ddl_alter_query/test.py::test_alter 0.00s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[cache-False-False] 0.00s setup test_executable_user_defined_function/test.py::test_executable_function_non_direct_bash 0.00s setup test_executable_user_defined_function/test.py::test_executable_function_query_cache 0.00s setup test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-s3_native_copy] 0.00s teardown test_executable_user_defined_function/test.py::test_executable_function_bash 0.00s setup test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf1 0.00s teardown test_executable_user_defined_function/test.py::test_executable_function_parameter_python 0.00s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[direct-False-True] 0.00s setup test_concurrent_ttl_merges/test.py::test_no_ttl_merges_in_busy_pool 0.00s setup test_distributed_directory_monitor_split_batch_on_failure/test.py::test_distributed_background_insert_split_batch_on_failure_ON 0.00s teardown test_executable_user_defined_function/test.py::test_executable_function_send_chunk_header_python 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[hashed-True-False] 0.00s teardown test_executable_user_defined_function/test.py::test_executable_function_input_nullable_python 0.00s teardown test_backup_restore_azure_blob_storage/test.py::test_backup_restore_correct_block_ids 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[direct-False-False] 0.00s teardown test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-s3_native_copy] 0.00s setup test_accept_invalid_certificate/test.py::test_strict_connection_reject 0.00s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[flat-True-False] 0.00s setup test_distributed_ddl_password/test.py::test_truncate 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[direct-False-True] 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_hashed-False-False] 0.00s teardown test_default_role/test.py::test_set_default_roles 0.00s setup test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[hashed-False-False] 0.00s teardown 
test_executable_user_defined_function/test.py::test_executable_function_python 0.00s teardown test_check_table_name_length/test.py::test_backward_compatibility 0.00s teardown test_backup_restore_azure_blob_storage/test.py::test_backup_restore_diff_container 0.00s setup test_executable_user_defined_function/test.py::test_executable_function_send_chunk_header_python 0.00s teardown test_executable_user_defined_function/test.py::test_executable_function_non_direct_bash 0.00s teardown test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk] 0.00s teardown test_accept_invalid_certificate/test.py::test_default 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_ranged[range_hashed-False-False] 0.00s setup test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-default] 0.00s teardown test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf1 0.00s setup test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_two_replicas 0.00s setup test_compatibility_merge_tree_settings/test.py::test_config_overrides_compatibility 0.00s teardown test_executable_user_defined_function/test.py::test_executable_function_query_cache 0.00s teardown test_backup_restore_azure_blob_storage/test.py::test_backup_restore_on_merge_tree 0.00s setup test_executable_user_defined_function/test.py::test_executable_function_bash 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[cache-False-False] 0.00s teardown test_attach_partition_using_copy/test.py::test_both_mergetree 0.00s setup test_executable_user_defined_function/test.py::test_executable_function_sum_json_python 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[direct-True-False] 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_direct-False-False] 0.00s teardown test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-default] 0.00s setup test_buffer_profile/test.py::test_default_profile 0.00s setup test_backup_restore_azure_blob_storage/test.py::test_backup_restore_correct_block_ids 0.00s setup test_executable_user_defined_function/test.py::test_executable_function_slow_python 0.00s setup test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume] 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[flat-False-False] 0.00s teardown test_buffer_profile/test.py::test_buffer_profile 0.00s teardown test_attach_partition_using_copy/test.py::test_not_work_on_different_disk 0.00s teardown test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-s3_no_native_copy] 0.00s setup test_attach_partition_using_copy/test.py::test_both_mergetree 0.00s teardown test_default_role/test.py::test_alter_user 0.00s teardown test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-s3_native_copy] 0.00s setup test_check_table_name_length/test.py::test_check_table_name_length 0.00s setup test_attach_partition_using_copy/test.py::test_not_work_on_different_disk 0.00s teardown test_executable_user_defined_function/test.py::test_executable_function_argument_python 0.00s setup test_accept_invalid_certificate/test.py::test_strict_reject 0.00s teardown test_accept_invalid_certificate/test.py::test_accept 0.00s setup test_attach_partition_using_copy/test.py::test_only_destination_replicated 0.00s teardown 
test_executable_user_defined_function/test.py::test_executable_function_signalled_python 0.00s teardown test_accept_invalid_certificate/test.py::test_connection_accept 0.00s teardown test_executable_user_defined_function/test.py::test_executable_function_slow_python 0.00s teardown test_executable_user_defined_function/test.py::test_executable_function_sum_json_python 0.00s teardown test_accept_invalid_certificate/test.py::test_strict_reject 0.00s teardown test_accept_invalid_certificate/test.py::test_strict_connection_reject =========================== short test summary info ============================ FAILED test_attach_partition_using_copy/test.py::test_all_replicated - Except... FAILED test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk] - Exce... FAILED test_attach_partition_using_copy/test.py::test_both_mergetree - Except... FAILED test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume] - Ex... FAILED test_attach_partition_using_copy/test.py::test_not_work_on_different_disk FAILED test_attach_partition_using_copy/test.py::test_only_destination_replicated PASSED test_accept_invalid_certificate/test.py::test_accept PASSED test_accept_invalid_certificate/test.py::test_connection_accept PASSED test_accept_invalid_certificate/test.py::test_default PASSED test_accept_invalid_certificate/test.py::test_strict_connection_reject PASSED test_accept_invalid_certificate/test.py::test_strict_reject PASSED test_accept_invalid_certificate/test.py::test_strict_reject_with_config PASSED test_buffer_profile/test.py::test_buffer_profile PASSED test_buffer_profile/test.py::test_default_profile PASSED test_default_role/test.py::test_alter_user PASSED test_default_role/test.py::test_set_default_roles PASSED test_default_role/test.py::test_wrong_set_default_role PASSED test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-default] PASSED test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-s3_native_copy] PASSED test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-s3_no_native_copy] PASSED test_executable_user_defined_function/test.py::test_executable_function_always_error_python PASSED test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-default] PASSED test_executable_user_defined_function/test.py::test_executable_function_argument_python PASSED test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-s3_native_copy] PASSED test_executable_user_defined_function/test.py::test_executable_function_bash PASSED test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-s3_no_native_copy] PASSED test_executable_user_defined_function/test.py::test_executable_function_input_nullable_python PASSED test_executable_user_defined_function/test.py::test_executable_function_non_direct_bash PASSED test_executable_user_defined_function/test.py::test_executable_function_parameter_python PASSED test_executable_user_defined_function/test.py::test_executable_function_python PASSED test_executable_user_defined_function/test.py::test_executable_function_query_cache PASSED test_executable_user_defined_function/test.py::test_executable_function_send_chunk_header_python PASSED test_executable_user_defined_function/test.py::test_executable_function_signalled_python PASSED test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_complex[complex_key_hashed] PASSED test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_in_empty_pool PASSED 
test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_ranged[range_hashed] PASSED test_compatibility_merge_tree_settings/test.py::test_check_projections_compatibility PASSED test_compatibility_merge_tree_settings/test.py::test_config_overrides_compatibility PASSED test_check_table_name_length/test.py::test_backward_compatibility PASSED test_check_table_name_length/test.py::test_check_table_name_length PASSED test_executable_user_defined_function/test.py::test_executable_function_slow_python PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_cache-False-False] PASSED test_executable_user_defined_function/test.py::test_executable_function_sum_json_python PASSED test_executable_user_defined_function/test.py::test_executable_function_sum_python PASSED test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_simple[hashed] PASSED test_custom_settings/test.py::test_custom_settings PASSED test_custom_settings/test.py::test_illformed_setting PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_direct-False-False] PASSED test_file_cluster/test.py::test_count PASSED test_ddl_alter_query/test.py::test_alter PASSED test_file_cluster/test.py::test_format_detection PASSED test_backward_compatibility/test_short_strings_aggregation.py::test_backward_compatability PASSED test_ddl_alter_query/test.py::test_ddl_queue_hostname_change PASSED test_backward_compatibility/test_memory_bound_aggregation.py::test_backward_compatability PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_hashed-False-False] PASSED test_distributed_directory_monitor_split_batch_on_failure/test.py::test_distributed_background_insert_split_batch_on_failure_OFF PASSED test_cleanup_after_start/test.py::test_old_dirs_cleanup PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_ranged[range_hashed-False-False] PASSED test_cluster_discovery/test_password.py::test_connect_with_password PASSED test_compression_nested_columns/test.py::test_nested_compression_codec PASSED test_config_reloader_interval/test.py::test_reload_config PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[cache-False-False] PASSED test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_in_empty_pool_replicated PASSED test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field PASSED test_distributed_directory_monitor_split_batch_on_failure/test.py::test_distributed_background_insert_split_batch_on_failure_ON PASSED test_concurrent_queries_for_user_restriction/test.py::test_exception_message PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[direct-False-False] PASSED test_backward_compatibility/test_ip_types_binary_compatibility.py::test_ip_types_binary_compatibility PASSED test_config_xml_main/test.py::test_xml_main_conf PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[flat-False-False] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[hashed-False-False] PASSED test_cluster_discovery/test_auxiliary_keeper.py::test_cluster_discovery_with_auxiliary_keeper_startup_and_stop PASSED test_config_yaml_merge_keys/test.py::test_yaml_merge_keys_conf PASSED test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_two_replicas PASSED test_ddl_worker_replicas/test.py::test_ddl_worker_replicas PASSED 
test_backup_restore_azure_blob_storage/test.py::test_backup_restore PASSED test_concurrent_ttl_merges/test.py::test_no_ttl_merges_in_busy_pool PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[cache-False-True] PASSED test_backup_restore_azure_blob_storage/test.py::test_backup_restore_correct_block_ids PASSED test_backup_restore_azure_blob_storage/test.py::test_backup_restore_diff_container PASSED test_backup_restore_azure_blob_storage/test.py::test_backup_restore_on_merge_tree PASSED test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf1 PASSED test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf2 PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[direct-False-True] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[flat-False-True] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[hashed-False-True] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_cache-True-False] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_direct-True-False] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_hashed-True-False] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_ranged[range_hashed-True-False] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[cache-True-False] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[direct-True-False] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[flat-True-False] PASSED test_distributed_ddl_password/test.py::test_alter PASSED test_distributed_ddl_password/test.py::test_truncate PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[hashed-True-False] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[cache-True-True] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[direct-True-True] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[flat-True-True] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[hashed-True-True] =================== 6 failed, 94 passed in 696.31s (0:11:36) =================== Traceback (most recent call last): File "/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration/./runner", line 492, in subprocess.check_call(cmd, shell=True, bufsize=0) File "/usr/lib/python3.10/subprocess.py", line 369, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command 'docker run --rm --name clickhouse_integration_tests_crx4y6 --privileged --dns-search='.' 
--memory=30709026816 --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=5dc43a6382f0 -e DOCKER_BASE_TAG=5ccda723c1fc -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=d862517635bf -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS="--dist=loadfile -n 10 -rfEps --run-id=0 --color=no --durations=0 --report-log=parallel0_0.jsonl --report-log-exclude-logs-on-passed-tests test_accept_invalid_certificate/test.py::test_accept test_accept_invalid_certificate/test.py::test_connection_accept test_accept_invalid_certificate/test.py::test_default test_accept_invalid_certificate/test.py::test_strict_connection_reject test_accept_invalid_certificate/test.py::test_strict_reject test_accept_invalid_certificate/test.py::test_strict_reject_with_config test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field test_attach_partition_using_copy/test.py::test_all_replicated test_attach_partition_using_copy/test.py::test_both_mergetree test_attach_partition_using_copy/test.py::test_not_work_on_different_disk test_attach_partition_using_copy/test.py::test_only_destination_replicated test_backup_restore_azure_blob_storage/test.py::test_backup_restore test_backup_restore_azure_blob_storage/test.py::test_backup_restore_correct_block_ids test_backup_restore_azure_blob_storage/test.py::test_backup_restore_diff_container test_backup_restore_azure_blob_storage/test.py::test_backup_restore_on_merge_tree test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf1 test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf2 'test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-default]' 'test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-s3_native_copy]' 'test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[0-s3_no_native_copy]' 'test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-default]' 'test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-s3_native_copy]' 'test_backup_restore_s3/test_throttling.py::test_backup_scheduler_settings[1-s3_no_native_copy]' test_backward_compatibility/test_ip_types_binary_compatibility.py::test_ip_types_binary_compatibility test_backward_compatibility/test_memory_bound_aggregation.py::test_backward_compatability test_backward_compatibility/test_short_strings_aggregation.py::test_backward_compatability test_buffer_profile/test.py::test_buffer_profile test_buffer_profile/test.py::test_default_profile 
test_check_table_name_length/test.py::test_backward_compatibility test_check_table_name_length/test.py::test_check_table_name_length test_cleanup_after_start/test.py::test_old_dirs_cleanup test_cluster_discovery/test_auxiliary_keeper.py::test_cluster_discovery_with_auxiliary_keeper_startup_and_stop test_cluster_discovery/test_password.py::test_connect_with_password test_compatibility_merge_tree_settings/test.py::test_check_projections_compatibility test_compatibility_merge_tree_settings/test.py::test_config_overrides_compatibility test_compression_nested_columns/test.py::test_nested_compression_codec test_concurrent_queries_for_user_restriction/test.py::test_exception_message test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_in_empty_pool test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_in_empty_pool_replicated test_concurrent_ttl_merges/test.py::test_limited_ttl_merges_two_replicas test_concurrent_ttl_merges/test.py::test_no_ttl_merges_in_busy_pool test_config_reloader_interval/test.py::test_reload_config test_config_xml_main/test.py::test_xml_main_conf test_config_yaml_merge_keys/test.py::test_yaml_merge_keys_conf 'test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk]' 'test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume]' test_custom_settings/test.py::test_custom_settings test_custom_settings/test.py::test_illformed_setting test_ddl_alter_query/test.py::test_alter test_ddl_alter_query/test.py::test_ddl_queue_hostname_change test_ddl_worker_replicas/test.py::test_ddl_worker_replicas test_default_role/test.py::test_alter_user test_default_role/test.py::test_set_default_roles test_default_role/test.py::test_wrong_set_default_role 'test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_complex[complex_key_hashed]' 'test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_ranged[range_hashed]' 'test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_simple[hashed]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_cache-False-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_cache-True-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_direct-False-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_direct-True-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_hashed-False-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_complex[complex_key_hashed-True-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_ranged[range_hashed-False-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_ranged[range_hashed-True-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[cache-False-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[cache-True-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[direct-False-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[direct-True-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[flat-False-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[flat-True-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[hashed-False-False]' 
'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple[hashed-True-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[cache-False-True]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[cache-True-True]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[direct-False-True]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[direct-True-True]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[flat-False-True]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[flat-True-True]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[hashed-False-True]' 'test_dictionaries_all_layouts_separate_sources/test_mongo.py::test_simple_ssl[hashed-True-True]' test_distributed_ddl_password/test.py::test_alter test_distributed_ddl_password/test.py::test_truncate test_distributed_directory_monitor_split_batch_on_failure/test.py::test_distributed_background_insert_split_batch_on_failure_OFF test_distributed_directory_monitor_split_batch_on_failure/test.py::test_distributed_background_insert_split_batch_on_failure_ON test_executable_user_defined_function/test.py::test_executable_function_always_error_python test_executable_user_defined_function/test.py::test_executable_function_argument_python test_executable_user_defined_function/test.py::test_executable_function_bash test_executable_user_defined_function/test.py::test_executable_function_input_nullable_python test_executable_user_defined_function/test.py::test_executable_function_non_direct_bash test_executable_user_defined_function/test.py::test_executable_function_parameter_python test_executable_user_defined_function/test.py::test_executable_function_python test_executable_user_defined_function/test.py::test_executable_function_query_cache test_executable_user_defined_function/test.py::test_executable_function_send_chunk_header_python test_executable_user_defined_function/test.py::test_executable_function_signalled_python test_executable_user_defined_function/test.py::test_executable_function_slow_python test_executable_user_defined_function/test.py::test_executable_function_sum_json_python test_executable_user_defined_function/test.py::test_executable_function_sum_python test_file_cluster/test.py::test_count test_file_cluster/test.py::test_format_detection -vvv " altinityinfra/integration-tests-runner:226bfaf75ac1 ' returned non-zero exit status 1.
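Editor's note: the six FAILED entries in the summary above all come from DNS_ERROR while ATTACH-ing tables backed by a web disk pointing at raw.githubusercontent.com (see the captured exception earlier in this log); the runner process itself then fails because the pytest session inside the container exits with status 1. A minimal sketch of that last step, assuming only what the traceback shows (tests/integration/runner builds the long `docker run ...` command string and runs it with subprocess.check_call); the short "exit 1" command below is a hypothetical stand-in so the snippet runs without Docker:

import subprocess

# Stand-in for the real 'docker run --rm --name clickhouse_integration_tests_... ' command.
cmd = "exit 1"

try:
    # Same call pattern as runner line 492: shell string, unbuffered.
    subprocess.check_call(cmd, shell=True, bufsize=0)
except subprocess.CalledProcessError as e:
    # Mirrors the final line of this log:
    # "... returned non-zero exit status 1."
    print(f"Command {e.cmd!r} returned non-zero exit status {e.returncode}.")

Because check_call raises on any non-zero exit code, a single failing test module inside the container is enough to make the whole runner invocation report failure, even though 94 of the 100 selected tests passed.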